Kubernetes 1.13.0 Manual Deployment

This article walks through building a Kubernetes cluster by hand, from environment initialization and component installation to network configuration and CoreDNS deployment, providing a complete cluster setup guide.

Contents

I. Initialization

1. Set hostnames (all nodes)

2. Passwordless SSH login (steps omitted)

3. System configuration (all nodes)

4. Initialize certificates (run on k8s-node1 only)

II. Installation

1. Install and configure etcd (on the master node)

2. Install and configure Flannel

3. Install and configure Docker

4. Install and configure Kubernetes

The community now provides many tools for automated Kubernetes deployment, such as kubespray and kubeadm.

We nevertheless deployed manually, and this post records the process. The component versions used in this installation:

Component       Version
etcd            3.3.11
kubernetes      1.13.4
docker-ce       18.06.1.ce
flannel         0.11.0
I. Initialization

1. Set hostnames (all nodes)

This deployment uses three nodes; k8s-node1 serves as both master and worker.

k8s-node1    172.16.9.201    master,node
k8s-node2    172.16.9.202    node
k8s-node3    172.16.9.203    node

Add the node entries to /etc/hosts:

172.16.9.201 k8s-node1
172.16.9.202 k8s-node2
172.16.9.203 k8s-node3

2. Passwordless SSH login (steps omitted)

3. System configuration (all nodes)

Kubernetes kernel parameters:

cat > /etc/sysctl.d/kubernetes.conf <<-EOF
net.ipv4.ip_forward = 1
net.ipv4.conf.all.route_localnet = 1
# in case the ARP cache overflows in a large cluster!
net.ipv4.neigh.default.gc_thresh1 = 70000
net.ipv4.neigh.default.gc_thresh2 = 80000
net.ipv4.neigh.default.gc_thresh3 = 90000
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.file-max = 65535
# es requires vm.max_map_count to be at least 262144.
vm.max_map_count = 262144
# kubelet requires swap off.
# https://github.com/kubernetes/kubernetes/issues/53533
vm.swappiness = 0
EOF

sysctl -p /etc/sysctl.d/kubernetes.conf
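Note that the net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded, so sysctl -p may fail on those lines on a fresh machine. A minimal sketch to load the module now and persist it across reboots:

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf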

nginx kernel parameters (we use nginx as a reverse proxy):

cat > /etc/sysctl.d/nginx.conf <<-EOF
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
EOF

sysctl -p /etc/sysctl.d/nginx.conf

General system settings:

swapoff -a
sed -i -r 's|^\S+\s+swap\s+swap.*|# &|' /etc/fstab

# raise the maximum number of files a process can open, to avoid
# the nginx process of 'nginx-ingress-controller' failing to set
# 'worker_rlimit_nofile' to '94520' in 0.12.0+
sed -i -r '/^\* (soft|hard) nofile/d' /etc/security/limits.conf
echo "* soft nofile 100000" >> /etc/security/limits.conf
echo "* hard nofile 200000" >> /etc/security/limits.conf

systemctl disable firewalld.service
systemctl stop firewalld.service

# clean up any existing iptables rules.
iptables -F && iptables -F -t nat
iptables -X && iptables -X -t nat

sed -i -r 's|^(SELINUX=).*|\1disabled|' /etc/selinux/config
setenforce 0

4. Initialize certificates (run on k8s-node1 only)

Download the cfssl tools:

curl "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -o "/usr/local/bin/cfssl"
curl "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -o "/usr/local/bin/cfssljson"
curl "https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64" -o "/usr/local/bin/cfssl-certinfo"

chmod +x "/usr/local/bin/cfssl"
chmod +x "/usr/local/bin/cfssljson"
chmod +x "/usr/local/bin/cfssl-certinfo"

Initialize the CA (the "expiry" field below sets the certificate lifetime and can be adjusted; note that JSON does not allow inline comments):

mkdir -p /etc/kubernetes/ssl
cd /etc/kubernetes/ssl

# Generate CA Certificates
cat > ca-config.json <<-EOF
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ],
                "expiry": "87600h"   # 过期时间,可以自行修改
            }
        }
    }
}
EOF

cat > ca-csr.json <<-EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "WuHan",
            "L": "WuHan",
            "O": "kubernetes",
            "OU": "CA"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json |cfssljson -bare ca
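Optionally inspect the generated CA certificate:

cfssl-certinfo -cert ca.pem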

Generate the service certificates

Certificates are needed for etcd, docker, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, and the admin client.

The helper function below is used to generate each certificate.

# Generic helper to generate an SSL certificate. $ALLOW_IPS in "hosts" lists all
# IPs allowed to access the services; it is recommended to include the IPs and
# hostnames of every cluster node and of any client machines that need access.
function generate_ssl_certificates {
    if [[ "$#" -ne 3 ]]; then
        return 1
    fi

    local service_name="${1}"
    local common_name="${2}"
    local organization="${3}"
    local csr_file="${service_name}-csr.json"

    cd /etc/kubernetes/ssl

	cat > "${csr_file}" <<-EOF
	{
	    "CN": "CN",
	    "key": {
	        "algo": "rsa",
	        "size": 2048
	    },
	    "hosts": [
	        "10.10.0.1",
	        "10.10.0.2",
	        "127.0.0.1",
	        "kubernetes",
	        "kubernetes.default",
	        "kubernetes.default.svc",
	        "$ALLOW_IPS"
	    ],
	    "names": [
	        {
	            "C": "CN",
	            "ST": "WuHan",
	            "L": "WuHan",
	            "O": "${organization}",
	            "OU": "kubernetes"
	        }
	    ]
	}
	EOF

    cfssl gencert \
          -ca=ca.pem \
          -ca-key=ca-key.pem \
          -config=ca-config.json \
          -profile=kubernetes \
          "${csr_file}" |cfssljson -bare "${service_name}"
}
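The hosts template above splices $ALLOW_IPS into a JSON array, so it must be defined before the function is called. One (admittedly hacky) way to list several hosts for this three-node cluster is to embed the ", " separators in the value, which become JSON element boundaries after expansion:

ALLOW_IPS='172.16.9.201", "172.16.9.202", "172.16.9.203", "k8s-node1", "k8s-node2", "k8s-node3'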

# generate the certificate and private key for each service
generate_ssl_certificates etcd etcd etcd
generate_ssl_certificates docker docker docker
generate_ssl_certificates kube-apiserver system:kube-apiserver system:kube-apiserver
generate_ssl_certificates kube-controller-manager system:kube-controller-manager system:kube-controller-manager
generate_ssl_certificates kube-scheduler system:kube-scheduler system:kube-scheduler

# notes: kube-proxy is different from other kubernetes components.
generate_ssl_certificates kube-proxy system:kube-proxy system:node-proxier

# generate the admin client certificate and private key.
generate_ssl_certificates admin admin system:masters

# the kube-controller-manager leverages a key pair to generate and sign service
# account tokens, as described in the managing service accounts documentation.
generate_ssl_certificates service-account service-accounts kubernetes

Copy the generated certificates to the same directory on the other cluster nodes:

ssh k8s-node2 "mkdir -p /etc/kubernetes/ssl"
ssh k8s-node3 "mkdir -p /etc/kubernetes/ssl"
scp /etc/kubernetes/ssl/* k8s-node2:/etc/kubernetes/ssl/
scp /etc/kubernetes/ssl/* k8s-node3:/etc/kubernetes/ssl/

Initialize the kubelet certificate (run on every node)

cd /etc/kubernetes/ssl

# the node's primary IP; adjust if 'hostname -i' does not return it on your hosts
host_ip=$(hostname -i)

cat > kubelet-$(hostname).json <<-EOF
{
    "CN": "system:node:$(hostname)",
    "hosts": [
        "$(hostname)",
        "${host_ip}"
    ],
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "WuHan",
            "L": "WuHan",
            "O": "system:nodes",
            "OU": "kubernetes"
        }
    ]
}
EOF

cfssl gencert \
      -ca=ca.pem \
      -ca-key=ca-key.pem \
      -config=ca-config.json \
      -profile=kubernetes \
      kubelet-$(hostname).json |cfssljson -bare kubelet-$(hostname)
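Optionally verify the subject and validity period of the node certificate:

openssl x509 -in kubelet-$(hostname).pem -noout -subject -dates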

II. Installation

1. Install and configure etcd (on the master node)

Install etcd:

yum install -y -q "etcd-3.3.11"

Configure etcd:

# edit /etc/etcd/etcd.conf
vim /etc/etcd/etcd.conf

ETCD_NAME=etcd0
ETCD_DATA_DIR="/var/lib/etcd/etcd0"
ETCD_LISTEN_PEER_URLS="https://172.16.9.201:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.9.201:2379,https://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.9.201:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.9.201:2379"
ETCD_INITIAL_CLUSTER="etcd0=https://172.16.9.201:2380"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_AUTO_COMPACTION_RETENTION="1"

# this is a single-node etcd deployment; for an etcd cluster, set instead:
ETCD_INITIAL_CLUSTER="etcd0=https://172.16.9.201:2380,etcd1=https://172.16.9.202:2380,etcd2=https://172.16.9.203:2380"

# for HTTPS support, also configure the following:
mkdir /etc/etcd/ssl
cp /etc/kubernetes/ssl/ca.pem /etc/etcd/ssl/
cp /etc/kubernetes/ssl/etcd.pem /etc/etcd/ssl/
cp /etc/kubernetes/ssl/etcd-key.pem /etc/etcd/ssl/

ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
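With the configuration in place, start etcd. The rpm ships a systemd unit that runs etcd as the etcd user, so make sure it can read the certificates (the ownership shown is an assumption based on the package defaults):

chown -R etcd:etcd /etc/etcd/ssl

systemctl enable etcd.service
systemctl start etcd.service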

Configure etcdctl aliases:

cat >> /root/.bashrc <<-EOF
alias etcdctlv2='ETCDCTL_API=2 etcdctl \
                               --endpoints=https://172.16.9.201:2379 \
                               --ca-file=/etc/etcd/ssl/ca.pem \
                               --cert-file=/etc/etcd/ssl/etcd.pem \
                               --key-file=/etc/etcd/ssl/etcd-key.pem'
alias etcdctlv3='ETCDCTL_API=3 etcdctl \
                               --endpoints=https://172.16.9.201:2379 \
                               --cacert=/etc/etcd/ssl/ca.pem \
                               --cert=/etc/etcd/ssl/etcd.pem \
                               --key=/etc/etcd/ssl/etcd-key.pem'
EOF

source ~/.bashrc

[root@k8s-node1 ~]# etcdctlv2 cluster-health
member cfc35c28cabf1d4e is healthy: got healthy result from https://172.16.9.201:2379
cluster is healthy

2. Install and configure Flannel

Deploy the binary release directly on the hosts (all nodes):

wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz

mkdir -p /tmp/flannel
tar -xzf flannel-v0.11.0-linux-amd64.tar.gz -C /tmp/flannel
cp /tmp/flannel/flanneld /usr/local/bin
cp /tmp/flannel/mk-docker-opts.sh /usr/local/bin

Write the Flannel network configuration into etcd (on the master node):

flannel_config=$(cat <<-EOF | python
import json
conf = dict()
conf['Network'] = '172.17.0.0/16'
conf['SubnetLen'] = 24
conf['Backend'] = {'Type': 'vxlan'}
print(json.dumps(conf))
EOF
)

etcdctlv2 set /k8s.com/network/config "${flannel_config}"

etcdctlv2 get /k8s.com/network/config
{"Backend": {"Type": "vxlan"}, "Network": "172.17.0.0/16", "SubnetLen": 24}

Configure flanneld (all nodes):

cat > "/etc/systemd/system/flanneld.service" <<-EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \
            -etcd-cafile=/etc/kubernetes/ssl/ca.pem \
            -etcd-certfile=/etc/kubernetes/ssl/etcd.pem \
            -etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem \
            -etcd-endpoints=https://172.16.9.201:2379 \
            -etcd-prefix=/k8s.com/network \
            -iface=eth0 \
            -ip-masq

ExecStartPost=/usr/local/bin/mk-docker-opts.sh \
            -k DOCKER_NETWORK_OPTIONS \
            -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
WantedBy=docker.service
EOF

systemctl enable flanneld.service
systemctl start flanneld.service

# check the flannel.1 interface on each node
[root@k8s-node1 ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.3.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::3001:e8ff:fed6:a1f6  prefixlen 64  scopeid 0x20<link>
        ether 32:01:e8:d6:a1:f6  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

[root@k8s-node2 ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.50.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::b050:9ff:fe40:63ca  prefixlen 64  scopeid 0x20<link>
        ether b2:50:09:40:63:ca  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 10 overruns 0  carrier 0  collisions 0

[root@k8s-node3 ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.86.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::a0f6:a6ff:fe7d:bb15  prefixlen 64  scopeid 0x20<link>
        ether a2:f6:a6:7d:bb:15  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 10 overruns 0  carrier 0  collisions 0

# inspect the subnets recorded in etcd
[root@k8s-node1 ~]# etcdctlv2 ls /k8s.com/network/subnets
/k8s.com/network/subnets/172.17.86.0-24
/k8s.com/network/subnets/172.17.50.0-24
/k8s.com/network/subnets/172.17.3.0-24

[root@k8s-node1 ~]# etcdctlv2 get /k8s.com/network/subnets/172.17.86.0-24
{"PublicIP":"172.16.9.203","BackendType":"vxlan","BackendData":{"VtepMAC":"a2:f6:a6:7d:bb:15"}}

[root@k8s-node1 ~]# etcdctlv2 get /k8s.com/network/subnets/172.17.50.0-24
{"PublicIP":"172.16.9.202","BackendType":"vxlan","BackendData":{"VtepMAC":"b2:50:09:40:63:ca"}}

[root@k8s-node1 ~]# etcdctlv2 get /k8s.com/network/subnets/172.17.3.0-24
{"PublicIP":"172.16.9.201","BackendType":"vxlan","BackendData":{"VtepMAC":"32:01:e8:d6:a1:f6"}}

3. Install and configure Docker

# configure the yum repo (alternatively, download the rpm packages directly)
cat > /etc/yum.repos.d/docker-ce.repo <<-EOF
[docker-ce-stable]
name=Docker CE Stable Mirror Repository
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
EOF
 
yum install --enablerepo=docker-ce-stable -y docker-ce-18.06.1.ce
 
# systemd unit. 172.16.9.201:30050 below is our local Harbor registry; adjust
# the tcp:// --host address to each node's own IP. The heredoc delimiter is
# quoted so that $DOCKER_NETWORK_OPTIONS and $MAINPID survive into the unit file.
cat > "/etc/systemd/system/docker.service" <<-'EOF'
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd \
            $DOCKER_NETWORK_OPTIONS \
            --data-root=/var/lib/docker \
            --host=tcp://172.16.9.202:2375 \
            --host=unix:///var/run/docker.sock \
            --insecure-registry=172.16.9.201:30050 \
            --insecure-registry=k8s.gcr.io \
            --insecure-registry=quay.io \
            --ip-forward=true \
            --live-restore=true \
            --log-driver=json-file \
            --log-level=warn \
            --registry-mirror=https://registry.docker-cn.com \
            --selinux-enabled=false \
            --storage-driver=overlay2 \
            --tlscacert=/etc/kubernetes/ssl/ca.pem \
            --tlscert=/etc/kubernetes/ssl/docker.pem \
            --tlskey=/etc/kubernetes/ssl/docker-key.pem \
            --tlsverify

ExecReload=/bin/kill -s HUP $MAINPID
# need to reset the rule of iptables FORWARD chain to ACCEPT, because
# docker 1.13 changed the default iptables forwarding policy to DROP.
# https://github.com/moby/moby/pull/28257/files
# https://github.com/kubernetes/kubernetes/issues/40182
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
# TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF
 
systemctl enable docker
systemctl start docker

# check the docker0 bridge on each node
[root@k8s-node1 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.3.1  netmask 255.255.255.0  broadcast 172.17.3.255
        ether 02:42:7d:f2:e9:6d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@k8s-node2 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.50.1  netmask 255.255.255.0  broadcast 172.17.50.255
        ether 02:42:a0:33:76:a9  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@k8s-node3 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.86.1  netmask 255.255.255.0  broadcast 172.17.86.255
        ether 02:42:ba:e3:f9:41  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
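At this point containers on different nodes should be reachable over the vxlan overlay; a quick check from k8s-node1 against the other nodes' docker0 addresses (per the subnet listing above):

ping -c 2 172.17.50.1
ping -c 2 172.17.86.1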

4. Install and configure Kubernetes

This installation uses the Kubernetes server binary release; download it on one node and copy the binaries to the others:

wget https://dl.k8s.io/v1.13.4/kubernetes-server-linux-amd64.tar.gz

tar xvzf kubernetes-server-linux-amd64.tar.gz

# copy the following to all master nodes
cp -f kubernetes/server/bin/kubectl /usr/local/bin/
cp -f kubernetes/server/bin/kube-apiserver /usr/local/bin/
cp -f kubernetes/server/bin/kube-controller-manager /usr/local/bin/
cp -f kubernetes/server/bin/kube-scheduler /usr/local/bin/

# copy the following to all worker nodes
cp -f kubernetes/server/bin/kubectl /usr/local/bin/
cp -f kubernetes/server/bin/kubelet /usr/local/bin/
cp -f kubernetes/server/bin/kube-proxy /usr/local/bin/

Initialize the kubeconfig files (run on the master)

# 1.Generating the data encryption config and key
encryption_key=$(head -c 32 /dev/urandom |base64)

cat > "/etc/kubernetes/encryption-config.yaml" <<EOF
apiVersion: v1
kind: EncryptionConfig
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: ${encryption_key}
  - identity: {}
EOF

# 2.Generating the kubeconfig file for k8s component
for component in kube-controller-manager kube-scheduler kube-proxy; do
	kubectl config set-cluster kubernetes \
			--embed-certs=true \
			--certificate-authority="/etc/kubernetes/ca.pem" \
			--server=https://172.16.9.201:5443 \
			--kubeconfig="/etc/kubernetes/${component}.kubeconfig"
	
	kubectl config set-credentials "system:${component}" \
			--embed-certs=true \
			--client-certificate=/etc/kubernetes/ssl/${component}.pem \
			--client-key=/etc/kubernetes/ssl/${component}-key.pem \
			--kubeconfig="/etc/kubernetes/${component}.kubeconfig"
	
	kubectl config set-context default \
			--cluster=kubernetes \
			--user="system:${component}" \
			--kubeconfig="/etc/kubernetes/${component}.kubeconfig"
	
	kubectl config use-context default \
			--kubeconfig="/etc/kubernetes/${component}.kubeconfig"
done

# 3.Generating the kubeconfig file for user admin
kubectl config set-cluster kubernetes \
        --embed-certs=true \
        --certificate-authority="/etc/kubernetes/ssl/ca.pem" \
        --server=https://172.16.9.201:5443 \
        --kubeconfig="/etc/kubernetes/admin.kubeconfig"

kubectl config set-credentials admin \
        --embed-certs=true \
        --client-certificate="/etc/kubernetes/ssl/admin.pem" \
        --client-key="/etc/kubernetes/ssl/admin-key.pem" \
        --kubeconfig="/etc/kubernetes/admin.kubeconfig"

kubectl config set-context default \
        --cluster="${KUBE_CLUSTER_NAME}" \
        --user=admin \
        --kubeconfig="/etc/kubernetes/admin.kubeconfig"

kubectl config use-context default \
        --kubeconfig="/etc/kubernetes/admin.kubeconfig"

# 4. copy the config files to all masters and nodes
scp /etc/kubernetes/*.kubeconfig other_nodes:/etc/kubernetes/
scp /etc/kubernetes/encryption-config.yaml other_nodes:/etc/kubernetes/
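The kubelet unit further below expects /etc/kubernetes/kubelet-<hostname>.kubeconfig, which has not been generated yet. A sketch of creating it on each node from the kubelet certificate generated earlier, following the same pattern as the component kubeconfigs above:

kubectl config set-cluster kubernetes \
        --embed-certs=true \
        --certificate-authority=/etc/kubernetes/ssl/ca.pem \
        --server=https://172.16.9.201:5443 \
        --kubeconfig=/etc/kubernetes/kubelet-$(hostname).kubeconfig

kubectl config set-credentials system:node:$(hostname) \
        --embed-certs=true \
        --client-certificate=/etc/kubernetes/ssl/kubelet-$(hostname).pem \
        --client-key=/etc/kubernetes/ssl/kubelet-$(hostname)-key.pem \
        --kubeconfig=/etc/kubernetes/kubelet-$(hostname).kubeconfig

kubectl config set-context default \
        --cluster=kubernetes \
        --user=system:node:$(hostname) \
        --kubeconfig=/etc/kubernetes/kubelet-$(hostname).kubeconfig

kubectl config use-context default \
        --kubeconfig=/etc/kubernetes/kubelet-$(hostname).kubeconfig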

Install and configure the master

kube-apiserver

cat > "/etc/systemd/system/kube-apiserver.service" <<-EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
User=root
ExecStart=/usr/local/bin/kube-apiserver \
            --address=172.16.9.201 \
            --advertise-address=172.16.9.201 \
            --allow-privileged=true \
            --alsologtostderr=true \
            --apiserver-count=1 \
            --authorization-mode=Node,RBAC \
            --bind-address=172.16.9.201 \
            --client-ca-file=/etc/kubernetes/ssl/ca.pem \
            --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
            --enable-swagger-ui=true \
            --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
            --etcd-certfile=/etc/kubernetes/ssl/etcd.pem \
            --etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem \
            --etcd-prefix=/kubernetes \
            --etcd-servers=https://172.16.9.201:2379 \
            --event-ttl=1h \
            --experimental-encryption-provider-config=/etc/kubernetes/encryption-config.yaml \
            --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \
            --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
            --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
            --kubelet-https=true \
            --insecure-bind-address=172.16.9.201 \
            --insecure-port=7070 \
            --log-dir=/var/log/kubernetes \
            --log-flush-frequency=10s \
            --logtostderr=false \
            --runtime-config=api/all \
            --secure-port=5443 \
            --service-account-key-file=/etc/kubernetes/ssl/service-account.pem \
            --service-cluster-ip-range=10.10.0.0/16 \
            --service-node-port-range=30000-32767 \
            --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
            --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
            --v=4
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

kube-controller-manager

cat > "/etc/systemd/system/kube-controller-manager.service" <<-EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
User=root
ExecStart=/usr/local/bin/kube-controller-manager \
            --address=127.0.0.1 \
            --allocate-node-cidrs=false \
            --alsologtostderr=true \
            --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
            --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
            --bind-address=127.0.0.1 \
            --cluster-cidr=172.17.0.0/16 \
            --cluster-name=kubernetes \
            --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
            --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
            --controller-start-interval=0 \
            --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
            --leader-elect=true \
            --leader-elect-lease-duration=15s \
            --leader-elect-renew-deadline=10s \
            --leader-elect-retry-period=2s \
            --log-dir=/var/log/kubernetes \
            --log-flush-frequency=10s \
            --logtostderr=false \
            --node-cidr-mask-size=16 \
            --node-monitor-grace-period=30s \
            --node-monitor-period=3s \
            --pod-eviction-timeout=30s \
            --port=10252 \
            --root-ca-file=/etc/kubernetes/ssl/ca.pem \
            --secure-port=10257 \
            --service-account-private-key-file=/etc/kubernetes/ssl/service-account-key.pem \
            --service-cluster-ip-range=10.10.0.0/16 \
            --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
            --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
            --use-service-account-credentials=true \
            --v=4
Restart=on-failure
RestartSec=5
Type=simple
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

kube-scheduler

cat > "/etc/systemd/system/kube-scheduler.service" <<-EOF
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
User=root
ExecStart=/usr/local/bin/kube-scheduler \
            --address=127.0.0.1 \
            --alsologtostderr=true \
            --bind-address=127.0.0.1 \
            --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
            --leader-elect=true \
            --leader-elect-lease-duration=15s \
            --leader-elect-renew-deadline=10s \
            --leader-elect-retry-period=2s \
            --log-dir=/var/log/kubernetes \
            --log-flush-frequency=10s \
            --logtostderr=false \
            --port=10251 \
            --secure-port=10259 \
            --tls-cert-file=/etc/kubernetes/ssl/kube-scheduler.pem \
            --tls-private-key-file=/etc/kubernetes/ssl/kube-scheduler-key.pem \
            --v=4
Restart=on-failure
RestartSec=5
Type=simple
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Start the services:

for svc in kube-{apiserver,controller-manager,scheduler}.service; do
    systemctl enable ${svc}
    systemctl start ${svc}
done

export KUBECONFIG=/etc/kubernetes/admin.kubeconfig

[root@k8s-node1 ~]# kubectl get node
No resources found.
[root@k8s-node1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}
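Because the API server authenticates to kubelets with the system:kube-apiserver client certificate, commands that go through the kubelet API (kubectl logs/exec) may also need an RBAC grant for that user. A typical binding, assuming the certificate CN generated earlier:

kubectl create clusterrolebinding kube-apiserver:kubelet-api-admin \
        --clusterrole=system:kubelet-api-admin \
        --user=system:kube-apiserver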

Install and configure the nodes. The 172.16.9.201 addresses hardcoded below are for k8s-node1; use each node's own IP on the other hosts.

kube-proxy

cat > "/etc/systemd/system/kube-proxy.service" <<-EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
User=root
ExecStart=/usr/local/bin/kube-proxy \
            --alsologtostderr=true \
            --bind-address=172.16.9.201 \
            --cluster-cidr=172.17.0.0/16 \
            --hostname-override= \
            --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
            --log-dir=/var/log/kubernetes \
            --log-flush-frequency=5s \
            --logtostderr=false \
            --proxy-mode=iptables \
            --v=4
Restart=on-failure
RestartSec=5
Type=simple
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

kubelet

cat > "/etc/systemd/system/kubelet.service" <<-EOF
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
User=root
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
            --address=172.16.9.201 \
            --allow-privileged=true \
            --alsologtostderr=true \
            --client-ca-file=/etc/kubernetes/ssl/ca.pem \
            --cluster-dns=10.10.0.2 \
            --cluster-domain=k8s.local \
            --docker-tls \
            --docker-tls-ca=/etc/kubernetes/ssl/ca.pem \
            --docker-tls-cert=/etc/kubernetes/ssl/docker.pem \
            --docker-tls-key=/etc/kubernetes/ssl/docker-key.pem \
            --fail-swap-on=true \
            --healthz-port=10248 \
            --hostname-override= \
            --image-pull-progress-deadline=30m \
            --kubeconfig=/etc/kubernetes/kubelet-k8s-node1.kubeconfig \
            --log-dir=/var/log/kubernetes \
            --log-flush-frequency=5s \
            --logtostderr=false \
            --pod-infra-container-image=172.16.9.201:30050/kube-system/pause-amd64:3.1 \
            --port=10250 \
            --read-only-port=10255 \
            --register-node=true \
            --root-dir=/var/lib/kubelet \
            --runtime-request-timeout=10m \
            --serialize-image-pulls=false \
            --tls-cert-file=/etc/kubernetes/ssl/kubelet-k8s-node1.pem \
            --tls-private-key-file=/etc/kubernetes/ssl/kubelet-k8s-node1-key.pem \
            --v=4
Restart=on-failure
RestartSec=5
Type=simple
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
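The unit assumes its working and log directories exist, so create them before starting the services:

mkdir -p /var/lib/kubelet /var/log/kubernetes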

Start the services:

for svc in {kube-proxy,kubelet}.service; do
    systemctl enable ${svc}
    systemctl start ${svc}
done

[root@k8s-node1 ~]# kubectl get node
NAME        STATUS   ROLES         AGE     VERSION
k8s-node1   Ready    master,node   3m12s   v1.13.4
k8s-node2   Ready    node          3m11s   v1.13.4
k8s-node3   Ready    node          3m10s   v1.13.4

Install CoreDNS

CoreDNS runs as pods, so the required images must be prepared first (pull them on every node, or push them to a local registry):

docker pull coredns/coredns:1.4.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1
# re-tag the pause image for our Harbor registry; the kubelet unit above points
# --pod-infra-container-image at this address
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 172.16.9.201:30050/kube-system/pause-amd64:3.1
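Since the kubelets pull the pause image from Harbor, the re-tagged image also needs to be pushed there (assuming you are logged in to the registry):

docker push 172.16.9.201:30050/kube-system/pause-amd64:3.1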

Prepare the manifest (coredns.yaml):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes k8s.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    version: "1.4.0"
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: coredns
      version: "1.4.0"
  template:
    metadata:
      labels:
        k8s-app: coredns
        version: "1.4.0"
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.4.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile

---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: coredns
    version: "1.4.0"
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
    version: "1.4.0"
  clusterIP: 10.10.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

Deploy CoreDNS:

kubectl create -f coredns.yaml

[root@k8s-node1 ~]# kubectl --namespace=kube-system get pod
NAME                      READY   STATUS    RESTARTS   AGE
coredns-697bc57fb-49nmn   1/1     Running   0          28m
coredns-697bc57fb-hzlj8   1/1     Running   0          28m
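Verify in-cluster DNS resolution; note the cluster domain is k8s.local, as configured in the kubelet flags (this assumes the node can pull the busybox image):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default.svc.k8s.local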
