Deploying a Kubernetes 1.11.4 Cluster

This article walks through building a Kubernetes cluster on CentOS 7.0, covering the key steps: disabling the firewall and SELinux, installing and configuring Docker and kubeadm, starting the kubelet service, preparing an image registry, initializing the cluster and joining nodes, installing the Pod network, and checking node status.


Disable the firewall

If a firewall is enabled on the hosts, you need to open the ports required by the various Kubernetes components; see the "Check required ports" section of Installing kubeadm. For simplicity, we disable the firewall on every node here:

CentOS 7.0:

sudo systemctl stop firewalld.service    # stop firewalld
sudo systemctl disable firewalld.service # do not start firewalld at boot
sudo firewall-cmd --state                # check the firewall state
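If you would rather keep firewalld running, the following sketch opens the ports kubeadm needs (the port list follows the kubeadm documentation for this release; adjust it to your topology and treat it as a starting point, not a complete rule set):

# on the master: API server, etcd, kubelet, kube-scheduler, kube-controller-manager
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250-10252/tcp
# on workers: kubelet plus the NodePort service range
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --reload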

Disable SELinux

sudo setenforce 0
sudo vi /etc/selinux/config
# change SELINUX to disabled
SELINUX=disabled 

Create /etc/sysctl.d/k8s.conf

sudo vi /etc/sysctl.d/k8s.conf

Add the following:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

To apply the change, run:

sudo sysctl -p /etc/sysctl.d/k8s.conf

Possible problem: "is an unknown key"
Error:

error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key

Fix

sudo modprobe bridge
sudo lsmod |grep bridge
sudo sysctl -p /etc/sysctl.d/k8s.conf

Possible problem: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
Error:

[root@localhost ~]# sysctl -p /etc/sysctl.d/k8s.conf
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

Fix

modprobe br_netfilter
ls /proc/sys/net/bridge
sudo sysctl -p /etc/sysctl.d/k8s.conf
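modprobe only lasts until the next reboot. One way to load br_netfilter automatically at boot on CentOS 7 (an addition not in the original steps) is a modules-load.d entry:

echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf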

Install Docker

Add the Docker yum repository:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

List the available versions:

sudo yum list docker-ce.x86_64  --showduplicates |sort -r

Output:

[zzq@localhost ~]$ sudo yum list docker-ce.x86_64  --showduplicates |sort -r
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror, priorities
 * extras: mirrors.aliyun.com
 * elrepo: mirrors.tuna.tsinghua.edu.cn
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable
 * base: mirrors.aliyun.com
Available Packages
[zzq@localhost ~]$ 

Kubernetes 1.8 has been validated against Docker 1.11.2, 1.12.6, 1.13.1, and 17.03, so we install Docker 17.03.2 on every node.
Run:

sudo yum makecache fast

and then:

sudo yum install -y --setopt=obsoletes=0 \
  docker-ce-17.03.2.ce-1.el7.centos \
  docker-ce-selinux-17.03.2.ce-1.el7.centos

Possible problem

Failed to retrieve GPG key: [Errno 12] Timeout on https://download.docker.com/linux/centos/gpg: (28, 'Operation timed out after 30002 milliseconds with 0 out of 0 bytes received')

Fix
A timeout is almost always a network problem. Check your connectivity and verify that the resource is directly reachable, for example:

curl -L https://download.docker.com/linux/centos/gpg

Start Docker

CentOS 7:

sudo systemctl start docker

 

Starting with version 1.13, Docker changed its default firewall rules: the FORWARD chain of the iptables filter table is now disabled (policy DROP), which breaks cross-node Pod communication in a Kubernetes cluster. Run the following on every Docker node:

sudo iptables -P FORWARD ACCEPT

To make this persistent, add it as an ExecStartPost in a systemd drop-in for the docker service:

# create a systemd drop-in directory for the docker service
mkdir -p /etc/systemd/system/docker.service.d
# create the drop-in file /etc/systemd/system/docker.service.d/port.conf
vi /etc/systemd/system/docker.service.d/port.conf

Enter the following (note the [Service] section header, which a systemd drop-in requires), then save and exit:

[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT

Restart the docker service:

systemctl daemon-reload
systemctl restart docker
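To confirm the policy survived the restart, a quick check (it should print "Chain FORWARD (policy ACCEPT)"):

sudo iptables -nL FORWARD | head -1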

Install kubeadm and kubelet

Next, install kubeadm and kubelet on every node. Create the repository file:

vi /etc/yum.repos.d/kubernetes.repo

Enter the following:

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Install the Kubernetes packages:

yum makecache fast
yum install -y kubelet-1.11.4 kubeadm-1.11.4 kubectl-1.11.4

On success the install output ends with:

Installed:
  kubeadm.x86_64 0:1.11.4-0    kubectl.x86_64 0:1.11.4-0    kubelet.x86_64 0:1.11.4-0

Installed as dependencies:
  cri-tools.x86_64 0:1.11.0-0    kubernetes-cni.x86_64 0:0.6.0-0    socat.x86_64 0:1.7.3.2-2.el7

The output shows the installed versions and their dependencies; if you ever install manually, these are the dependency versions to match.

Align the cgroup driver

The cgroup driver the kubelet is started with must match the one Docker uses.

Check Docker's cgroup driver with:

docker info
or
docker info | grep -i cgroup

Output:

Cgroup Driver: cgroupfs

So Docker 17.03 uses the cgroupfs driver.

The kubelet startup flag, from the Kubernetes documentation:

--cgroup-driver string Driver that the kubelet uses to manipulate cgroups on the host.
Possible values: 'cgroupfs', 'systemd' (default "cgroupfs")

The default is cgroupfs, but the 10-kubeadm.conf file generated when installing kubelet and kubeadm via yum may have changed this value to systemd.

In 1.11.4 the flag is stored in /var/lib/kubelet/kubeadm-flags.env; check it with:

cat /var/lib/kubelet/kubeadm-flags.env|grep "cgroup-driver"

If nothing is found, the default cgroupfs is in effect and no change is needed.

If instead the output looks like the following, the two sides must be made consistent; you can change either 10-kubeadm.conf or Docker's configuration:

KUBELET_CGROUP_ARGS=--cgroup-driver=systemd

Here we change Docker's cgroup driver on each node to match the kubelet, by editing or creating /etc/docker/daemon.json:

vi /etc/docker/daemon.json

Add the following:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart Docker:

systemctl restart docker
systemctl status docker
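A quick way to compare the two drivers side by side (note that kubeadm-flags.env only exists after kubeadm has run on the node):

docker info 2>/dev/null | grep -i 'cgroup driver'
grep -o 'cgroup-driver=[a-z]*' /var/lib/kubelet/kubeadm-flags.env 2>/dev/null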

Handle swap

Starting with Kubernetes 1.8, the system's swap must be turned off; with the default configuration, the kubelet will not start otherwise.
The restriction can be lifted with the kubelet startup flag --fail-swap-on=false.

Option 1: disable swap

Disable swap:

swapoff -a

Also edit /etc/fstab and comment out the automatic swap mount, so that swap stays disabled after a reboot:

vi /etc/fstab

The file looks like this:

#
# /etc/fstab
# Created by anaconda on Tue Jun 19 06:52:02 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=2d32a0e0-9bda-4a68-9abf-6a827a517177 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0

Comment out the swap line, then save and exit:

#/dev/mapper/centos-swap swap                    swap    defaults        0 0
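If you prefer a non-interactive edit, this one-liner comments out any uncommented swap entry (it assumes a conventional fstab layout; review the file afterwards):

sudo sed -i '/^[^#].*\sswap\s/s/^/#/' /etc/fstab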

Confirm swap is off:

free -m

A Swap total of 0 means it is disabled:

[root@localhost ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:            976         115         283           6         577         661
Swap:             0           0           0

Also tune the swappiness parameter for k8s by editing the config file:

vi /etc/sysctl.d/k8s.conf

Add the line:

vm.swappiness=0

Then run

sysctl -p /etc/sysctl.d/k8s.conf

to apply the change.

Option 2: lift the swap restriction

If the host runs other services, turning off swap might affect them; in that case, change the kubelet startup parameters to lift the restriction instead.
Edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Add:

Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"

then reload systemd:

systemctl daemon-reload
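Note that with swap still enabled, kubeadm init in 1.11 will also fail its preflight check unless told to ignore it, so the init command used later would additionally need:

--ignore-preflight-errors=Swap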

Start the kubelet service

Enable and start the kubelet service on every node:

systemctl enable kubelet.service
systemctl start kubelet.service
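At this point kubelet will restart in a loop because /var/lib/kubelet/config.yaml does not exist yet; that is expected and resolves itself once kubeadm init (or kubeadm join) runs. See the config.yaml troubleshooting note further below. You can watch the state with:

systemctl status kubelet.service --no-pager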

Prepare the image registry

Once the images have been pulled and pushed, our registry registry.cn-qingdao.aliyuncs.com/joe-k8s/k8s contains the required images.

Without a server outside mainland China we cannot freely build images for whatever version tag we need; we have to rely on mirror registries that others keep up to date. A recommended, well-maintained and continuously updated k8s image mirror is anjia0532's:
Google Container Registry (gcr.io) mirror usable in China (long-term maintained)
anjia0532's GitHub
image index

The image index at https://hub.docker.com/u/anjia0532/ serves the same purpose as our own registry registry.cn-qingdao.aliyuncs.com/joe-k8s/k8s and is used below.

Pull the images

Note: because anjia0532's mirror renames the images, the pull.sh script used here is:

#!/bin/bash
KUBE_VERSION=v1.11.4
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.2.18
DNS_VERSION=1.1.3
username=anjia0532

images=(
    google-containers.kube-proxy-amd64:${KUBE_VERSION}
    google-containers.kube-scheduler-amd64:${KUBE_VERSION}
    google-containers.kube-controller-manager-amd64:${KUBE_VERSION}
    google-containers.kube-apiserver-amd64:${KUBE_VERSION}
    pause:${KUBE_PAUSE_VERSION}
    etcd-amd64:${ETCD_VERSION}
    coredns:${DNS_VERSION}
)

for image in "${images[@]}"
do
    # pull from the mirror, retag to the prefix kubeadm expects, drop the mirror tag
    docker pull ${username}/${image}
    docker tag ${username}/${image} k8s.gcr.io/${image}
    #docker tag ${username}/${image} gcr.io/google_containers/${image}
    docker rmi ${username}/${image}
done

unset KUBE_VERSION KUBE_PAUSE_VERSION ETCD_VERSION DNS_VERSION images username
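Run the script on every node (assuming it was saved as pull.sh):

chmod +x pull.sh
./pull.sh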

Check the images with:

docker images

The required images are now present:

[root@k8s ~]# docker images
REPOSITORY                                                                 TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/google-containers.kube-apiserver-amd64                          v1.11.4             821507941e9c        6 days ago          187 MB
k8s.gcr.io/google-containers.kube-proxy-amd64                              v1.11.4             46a3cd725628        6 days ago          97.8 MB
k8s.gcr.io/google-containers.kube-controller-manager-amd64                 v1.11.4             38521457c799        6 days ago          155 MB
k8s.gcr.io/google-containers.kube-scheduler-amd64                          v1.11.4             37a1403e6c1a        6 days ago          56.8 MB
k8s.gcr.io/coredns                                                         1.1.3               b3b94275d97c        2 months ago        45.6 MB
k8s.gcr.io/etcd-amd64                                                      3.2.18              b8df3b177be2        4 months ago        219 MB
k8s.gcr.io/pause                                                           3.1                 da86e6ba6ca1        7 months ago        742 kB

Note: the image name prefix must match what kubeadm config images list prints, otherwise init will not recognize the local images and will try to download them. Ours is k8s.gcr.io, which is why the script retags during the pull with:

docker tag ${username}/${image} k8s.gcr.io/${image}

When using anjia0532's naming, one more round of retagging is needed:

docker tag  k8s.gcr.io/google-containers.kube-apiserver-amd64:v1.11.4   k8s.gcr.io/kube-apiserver-amd64:v1.11.4
docker tag  k8s.gcr.io/google-containers.kube-controller-manager-amd64:v1.11.4  k8s.gcr.io/kube-controller-manager-amd64:v1.11.4
docker tag  k8s.gcr.io/google-containers.kube-scheduler-amd64:v1.11.4  k8s.gcr.io/kube-scheduler-amd64:v1.11.4
docker tag k8s.gcr.io/google-containers.kube-proxy-amd64:v1.11.4   k8s.gcr.io/kube-proxy-amd64:v1.11.4
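The same retagging as a loop, equivalent to the four commands above:

for c in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
    docker tag k8s.gcr.io/google-containers.${c}-amd64:v1.11.4 k8s.gcr.io/${c}-amd64:v1.11.4
done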

The prepared images now match the names and versions printed by kubeadm config images list:

[root@k8s ~]# docker images
REPOSITORY                                                                 TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-controller-manager-amd64                                   v1.11.4             38521457c799        6 days ago          155 MB
k8s.gcr.io/kube-apiserver-amd64                                            v1.11.4             821507941e9c        6 days ago          187 MB
k8s.gcr.io/kube-proxy-amd64                                                v1.11.4             46a3cd725628        6 days ago          97.8 MB
k8s.gcr.io/kube-scheduler-amd64                                            v1.11.4             37a1403e6c1a        6 days ago          56.8 MB
k8s.gcr.io/coredns                                                         1.1.3               b3b94275d97c        2 months ago        45.6 MB
k8s.gcr.io/etcd-amd64                                                      3.2.18              b8df3b177be2        4 months ago        219 MB
k8s.gcr.io/pause                                                           3.1                 da86e6ba6ca1        7 months ago        742 kB

Initialize the cluster with kubeadm init (master node only)

Before initializing, confirm that the kubelet starts and that the cgroup drivers match.

Next we initialize the cluster with kubeadm, using the host k8s as the Master Node.
Make sure no http_proxy or https_proxy is set,
then run the following on k8s:

kubeadm init   --kubernetes-version=v1.11.4   --pod-network-cidr=10.244.0.0/16   --apiserver-advertise-address=192.168.11.90 --token-ttl 0

--kubernetes-version corresponds to the version reported when the install succeeded: kubectl.x86_64 0:1.11.4-0.
Because we chose flannel as the Pod network plugin, the command above specifies --pod-network-cidr=10.244.0.0/16.
For some network solutions the Kubernetes master can also allocate a network range (CIDR) to each node; this includes some cloud providers as well as flannel. The subnet given via --pod-network-cidr is split up and handed out to each node. The range should be at least a /16 so that the controller-manager can assign a /24 to every node in the cluster. If you deploy flannel from its standard manifest, you should use --pod-network-cidr=10.244.0.0/16. Most CNI-based network solutions do not need this flag.
--apiserver-advertise-address is the address the apiserver advertises, normally the master's IP.

After kubeadm initializes, it prints the token nodes use to join the cluster. By default a token is valid for 24 hours; once expired it becomes unusable and a new one has to be generated, which is inconvenient, so --token-ttl 0 here makes the token never expire.
Recreating a token after expiry is shown in the sketch below.
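For reference, on kubeadm 1.11 a fresh non-expiring token together with the full join command can be generated on the master with:

kubeadm token create --ttl 0 --print-join-command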

Possible problem: stuck at preflight/images

During init, kubernetes automatically downloads the required docker images from Google's servers.
kubeadm init first checks any proxy servers while determining the https connection to kube-apiserver and warns if a proxy is configured.
If init hangs at preflight/images:

[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'

see the previous section and prepare the images first.

Possible problem: uses proxy

If you see warnings such as

    [WARNING HTTPProxy]: Connection to "https://192.168.11.90" uses proxy "http://127.0.0.1:8118". If that is not intended, adjust your proxy settings
    [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://127.0.0.1:8118". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
    [WARNING HTTPProxyCIDR]: connection to "10.244.0.0/16" uses proxy "http://127.0.0.1:8118". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration

then set no_proxy:

vi /etc/profile

Append at the end:

export no_proxy='anjia0532,127.0.0.1,192.168.11.90,k8s.gcr.io,10.96.0.0/12,10.244.0.0/16'

Apply the change:

source /etc/profile

For more detail on init errors, open a second console and follow the kubelet log:

journalctl -f -u kubelet.service

to see exactly what is blocking.

Possible problem: failed to read kubelet config file "/var/lib/kubelet/config.yaml"

Aug 14 20:44:42 k8s systemd[1]: kubelet.service holdoff time over, scheduling restart.
Aug 14 20:44:42 k8s systemd[1]: Started kubelet: The Kubernetes Node Agent.
Aug 14 20:44:42 k8s systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Aug 14 20:44:42 k8s kubelet[5471]: F0814 20:44:42.477878    5471 server.go:190] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory

Cause
A key file is missing; this usually happens when systemctl start kubelet is run before kubeadm init.
Fix
Run kubeadm init successfully first.

Possible problem: cni config uninitialized

KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Cause
kubelet is configured with network-plugin=cni, but no CNI plugin is installed yet, so the node status is NotReady. If you do not want this error, or do not need a network, you can remove network-plugin=cni from the kubelet configuration.
Fix

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Delete $KUBELET_NETWORK_ARGS from the last line.
In 1.11.4 these flags are stored in /var/lib/kubelet/kubeadm-flags.env; check with:

[root@k8s ~]# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni

Restart kubelet:

systemctl enable kubelet && systemctl start kubelet

Re-initialize:

kubeadm reset 
kubeadm init   --kubernetes-version=v1.11.4   --pod-network-cidr=10.244.0.0/16   --apiserver-advertise-address=192.168.11.90 --token-ttl 0

A successful init looks like this:

[root@k8s ~]# kubeadm init   --kubernetes-version=v1.11.4   --pod-network-cidr=10.244.0.0/16   --apiserver-advertise-address=192.168.11.90 --token-ttl 0
[init] using Kubernetes version: v1.11.4
[preflight] running pre-flight checks
I0815 16:24:34.651252   21183 kernel_validator.go:81] Validating kernel version
I0815 16:24:34.651356   21183 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.90]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s localhost] and IPs [192.168.11.90 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 40.004083 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s" as an annotation
[bootstraptoken] using token: hu2clf.898he8fnu64w3fur
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.11.90:6443 --token hu2clf.898he8fnu64w3fur --discovery-token-ca-cert-hash sha256:2a196bbd77e4152a700d294a666e9d97336d0f7097f55e19a651c19e03d340a4

The above is the complete output of a successful initialization.

It contains these key pieces:
Record the generated token; it is needed later when adding nodes to the cluster with kubeadm join.
The following commands set up kubectl (client) access to the cluster for a regular user; since the master also needs kubectl access, run them there as well:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Finally, it prints the command for joining nodes to the cluster:

kubeadm join 192.168.11.90:6443 --token hu2clf.898he8fnu64w3fur --discovery-token-ca-cert-hash sha256:2a196bbd77e4152a700d294a666e9d97336d0f7097f55e19a651c19e03d340a4

Check the cluster status:

kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

Confirm every component is Healthy.

Possible problem: net/http: TLS handshake timeout

If the error is that port 6443 on the master cannot be reached (timeout, connection refused), check the network. It is usually because http_proxy and https_proxy were set for a proxy, which blocks access to the node itself and the internal network.
Add a no_proxy entry in /etc/profile and retry, or re-init.
For example:

vi /etc/profile

Append at the end:

export no_proxy='anjia0532,127.0.0.1,192.168.11.90,192.168.11.91,192.168.11.92,k8s.gcr.io,10.96.0.0/12,10.244.0.0/16,localhost'

Save and exit, then apply the change:

source /etc/profile

If that still does not help, re-init; init checks the network for us, and an init without warnings generally means the network is fine. Use:

kubeadm reset
kubeadm init

If cluster initialization runs into trouble, clean up with the commands below before initializing again:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

Install the Pod network (master node only)

Next, install the flannel network add-on:

mkdir -p ~/k8s/
cd ~/k8s
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f  kube-flannel.yml

Output:

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

You can cat kube-flannel.yml to see which flannel image version the manifest uses:

cat kube-flannel.yml|grep quay.io/coreos/flannel

If a node has more than one network interface, you currently need to set the --iface parameter in kube-flannel.yml to the name of the interface on the cluster's internal network, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<interface> to the flanneld startup arguments:

containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1

Use kubectl get pod --all-namespaces -o wide to make sure all Pods are Running:

[root@k8s k8s]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                          READY     STATUS    RESTARTS   AGE       IP              NODE      NOMINATED NODE
kube-system   coredns-78fcdf6894-jf5tn      1/1       Running   0          6m        10.244.0.3      k8s       <none>
kube-system   coredns-78fcdf6894-ljmmh      1/1       Running   0          6m        10.244.0.2      k8s       <none>
kube-system   etcd-k8s                      1/1       Running   0          5m        192.168.11.90   k8s       <none>
kube-system   kube-apiserver-k8s            1/1       Running   0          5m        192.168.11.90   k8s       <none>
kube-system   kube-controller-manager-k8s   1/1       Running   0          5m        192.168.11.90   k8s       <none>
kube-system   kube-flannel-ds-amd64-fvpj7   1/1       Running   0          4m        192.168.11.90   k8s       <none>
kube-system   kube-proxy-c8rrg              1/1       Running   0          6m        192.168.11.90   k8s       <none>
kube-system   kube-scheduler-k8s            1/1       Running   0          5m        192.168.11.90   k8s       <none>

Let the master node take workloads (master node only)

In a cluster initialized with kubeadm, Pods are not scheduled onto the Master Node for safety reasons; that is, the master does not take workloads.

Since this is a test environment, we can let the Master Node take workloads.
k8s is the master node's hostname.
Allow the master to run pods with:

kubectl taint nodes --all node-role.kubernetes.io/master-

Output:

node "k8s" untainted

An error: taint "node-role.kubernetes.io/master:" not found in the output can be ignored.

To forbid the master from running pods again:

kubectl taint nodes k8s node-role.kubernetes.io/master=true:NoSchedule

Test DNS (master node only)

Run:

kubectl run curl --image=radial/busyboxplus:curl -i --tty

Output:

If you don't see a command prompt, try pressing enter.
[ root@curl-87b54756-vsf2s:/ ]$ 

Once inside, run

nslookup kubernetes.default

and confirm that resolution works; the output should look like:

[ root@curl-87b54756-vsf2s:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

Type

exit

to leave the container.

Add nodes to the Kubernetes cluster (worker nodes only)

Next we add the host k8s1 to the Kubernetes cluster by running the earlier join command on k8s1:

kubeadm join 192.168.11.90:6443 --token hu2clf.898he8fnu64w3fur --discovery-token-ca-cert-hash sha256:2a196bbd77e4152a700d294a666e9d97336d0f7097f55e19a651c19e03d340a4

A successful join looks like this:

[root@k8s1 ~]# kubeadm join 192.168.11.90:6443 --token hu2clf.898he8fnu64w3fur --discovery-token-ca-cert-hash sha256:2a196bbd77e4152a700d294a666e9d97336d0f7097f55e19a651c19e03d340a4
[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0815 16:53:55.675009    2758 kernel_validator.go:81] Validating kernel version
I0815 16:53:55.675090    2758 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "192.168.11.90:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.11.90:6443"
[discovery] Requesting info from "https://192.168.11.90:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.11.90:6443"
[discovery] Successfully established connection with API Server "192.168.11.90:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Then, on the master node, list the nodes in the cluster:

[root@k8s k8s]# kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
k8s       Ready      master    30m       v1.11.4
k8s1      NotReady   <none>    1m        v1.11.4
k8s2      NotReady   <none>    1m        v1.11.4
[root@k8s k8s]# 

If a worker node is NotReady, check for errors with

systemctl status kubelet.service

and check whether its pods reach the Running state:

[root@k8s kubernetes]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                          READY     STATUS              RESTARTS   AGE       IP              NODE      NOMINATED NODE
default       curl-87b54756-vsf2s           1/1       Running             1          1h        10.244.0.4      k8s       <none>
kube-system   coredns-78fcdf6894-jf5tn      1/1       Running             0          2h        10.244.0.3      k8s       <none>
kube-system   coredns-78fcdf6894-ljmmh      1/1       Running             0          2h        10.244.0.2      k8s       <none>
kube-system   etcd-k8s                      1/1       Running             0          2h        192.168.11.90   k8s       <none>
kube-system   kube-apiserver-k8s            1/1       Running             0          2h        192.168.11.90   k8s       <none>
kube-system   kube-controller-manager-k8s   1/1       Running             0          2h        192.168.11.90   k8s       <none>
kube-system   kube-flannel-ds-amd64-8p2px   0/1       Init:0/1            0          1h        192.168.11.91   k8s1      <none>
kube-system   kube-flannel-ds-amd64-fvpj7   1/1       Running             0          2h        192.168.11.90   k8s       <none>
kube-system   kube-flannel-ds-amd64-p6g4w   0/1       Init:0/1            0          1h        192.168.11.92   k8s2      <none>
kube-system   kube-proxy-c8rrg              1/1       Running             0          2h        192.168.11.90   k8s       <none>
kube-system   kube-proxy-hp4lj              0/1       ContainerCreating   0          1h        192.168.11.92   k8s2      <none>
kube-system   kube-proxy-tn2fl              0/1       ContainerCreating   0          1h        192.168.11.91   k8s1      <none>
kube-system   kube-scheduler-k8s            1/1       Running             0          2h        192.168.11.90   k8s       <none>

Pods stuck in ContainerCreating or Init:0/1 usually mean the worker cannot obtain the images; go back to the image preparation section.
Once the images are in place they quickly become Running, as below:

[root@k8s kubernetes]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                          READY     STATUS    RESTARTS   AGE       IP              NODE      NOMINATED NODE
default       curl-87b54756-vsf2s           1/1       Running   1          1h        10.244.0.4      k8s       <none>
kube-system   coredns-78fcdf6894-jf5tn      1/1       Running   0          2h        10.244.0.3      k8s       <none>
kube-system   coredns-78fcdf6894-ljmmh      1/1       Running   0          2h        10.244.0.2      k8s       <none>
kube-system   etcd-k8s                      1/1       Running   0          2h        192.168.11.90   k8s       <none>
kube-system   kube-apiserver-k8s            1/1       Running   0          2h        192.168.11.90   k8s       <none>
kube-system   kube-controller-manager-k8s   1/1       Running   0          2h        192.168.11.90   k8s       <none>
kube-system   kube-flannel-ds-amd64-8p2px   1/1       Running   0          1h        192.168.11.91   k8s1      <none>
kube-system   kube-flannel-ds-amd64-fvpj7   1/1       Running   0          2h        192.168.11.90   k8s       <none>
kube-system   kube-flannel-ds-amd64-p6g4w   1/1       Running   0          1h        192.168.11.92   k8s2      <none>
kube-system   kube-proxy-c8rrg              1/1       Running   0          2h        192.168.11.90   k8s       <none>
kube-system   kube-proxy-hp4lj              1/1       Running   0          1h        192.168.11.92   k8s2      <none>
kube-system   kube-proxy-tn2fl              1/1       Running   0          1h        192.168.11.91   k8s1      <none>
kube-system   kube-scheduler-k8s            1/1       Running   0          2h        192.168.11.90   k8s       <none>

The nodes now show as Ready as well:

[root@k8s kubernetes]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
k8s       Ready     master    2h        v1.11.4
k8s1      Ready     <none>    1h        v1.11.4
k8s2      Ready     <none>    1h        v1.11.4
[root@k8s kubernetes]# 

If the worker nodes also need to use kubectl, copy the kubeconfig file over and run:

mkdir -p $HOME/.kube
sudo cp -i  admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
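admin.conf lives at /etc/kubernetes/admin.conf on the master; one way to fetch it from a worker (assuming root SSH access to the master at 192.168.11.90) is:

scp root@192.168.11.90:/etc/kubernetes/admin.conf ./admin.conf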

Removing a node from the cluster

To remove the node k8s2 from the cluster, run the following.

On the master node:

kubectl drain k8s2 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s2

On k8s2:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

At this point the k8s cluster has been installed successfully.
