Kubernetes_deployment

For advice on choosing a Kubernetes version (a patched stable release), see the official Kubernetes release list.
This article covers three ways to deploy a Kubernetes cluster:

  • kubeadm deployment
  • Binary deployment
  • KubeKey deployment

System Configuration Before Deploying Kubernetes

# Configure on all nodes
1. Pre-installation preparation
5 x Rocky Linux 9.4, each with 4 CPU cores and 8 GB of RAM
3 master nodes + 2 worker nodes
Do not create a swap partition
IP allocation
192.168.122.226-230

2. Configure mutual trust (passwordless SSH login) between all nodes
# Configure on one node
cat  >> /etc/hosts  <<EOF
192.168.122.226 r-k8s-m1
192.168.122.227 r-k8s-m2
192.168.122.228 r-k8s-m3
192.168.122.229 r-k8s-n1
192.168.122.230 r-k8s-n2
EOF
# Push the hosts file from this node to the other nodes
for i in {227..230};do scp /etc/hosts 192.168.122.$i:/etc/;done
# Log in to each node and generate an SSH key pair
ssh-keygen -t rsa -b 4096
# On each node, copy its public key to the other nodes to establish passwordless login
cat  /root/.ssh/id_*.pub | ssh root@[ip] 'cat  >> /root/.ssh/authorized_keys'

3. Disable unneeded system services
# Adjust per distribution (openEuler, Rocky, and AlmaLinux require the NetworkManager service; do not disable it on those systems)
systemctl disable --now firewalld
systemctl disable --now NetworkManager
systemctl disable --now postfix
sed -i.bak 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config && setenforce 0
# Disable the unneeded services on all nodes in one pass
for i in {226..230};do ssh 192.168.122.$i "systemctl disable --now firewalld && systemctl disable --now NetworkManager && systemctl disable --now postfix";done
for i in {226..230};do ssh 192.168.122.$i "sed -i.bak 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config && setenforce 0";done


4. Synchronize time on all nodes
Option A: ntpdate
# Install the time service on any one node and sync it with an external time source
yum -y install ntp ntpdate
systemctl enable --now ntpd
# Sync the other nodes with that node via a cron entry
*/5 * * * * /sbin/ntpdate [ip] > /dev/null 2>&1;/sbin/hwclock -w
-------------------------------------------------------------------------------------------------------------
Option B: chrony
# Alternatively, install the chrony service on all nodes
yum -y install chrony 
systemctl enable --now chronyd
sed -Ei.bak 's/.*iburst.*/# &/g' /etc/chrony.conf 
echo "server 192.168.122.78 iburst" >> /etc/chrony.conf
systemctl restart chronyd
chronyc sources -v
# Or configure all nodes in one pass
for i in {226..230};do ssh 192.168.122.$i "yum -y install chrony";done
for i in {226..230};do ssh 192.168.122.$i "systemctl enable --now chronyd";done
for i in {226..230};do ssh 192.168.122.$i "sed -Ei.bak 's/.*iburst.*/# &/g' /etc/chrony.conf";done
for i in {226..230};do ssh 192.168.122.$i "echo 'server 192.168.122.78 iburst' >> /etc/chrony.conf";done
for i in {226..230};do ssh 192.168.122.$i "systemctl restart chronyd";done
for i in {226..230};do ssh 192.168.122.$i "chronyc sources -v";done
for i in {226..230};do ssh 192.168.122.$i "date";done

5. Disable swap
swapoff -a
sed -Ei.bak 's/.*swap.*/#&/g' /etc/fstab
# Or configure all nodes in one pass
for i in {226..230};do ssh 192.168.122.$i "swapoff -a";done
for i in {226..230};do ssh 192.168.122.$i 'sed -Ei.bak "s/.*swap.*/#&/g" /etc/fstab';done
for i in {226..230};do ssh 192.168.122.$i "cat /etc/fstab && free -h";done


6. Load the required kernel modules
# Install ipset and ipvsadm
yum -y install ipset ipvsadm
for i in {226..230};do ssh 192.168.122.$i "yum -y install ipset ipvsadm";done
# IPVS modules
cat > /etc/modules-load.d/ipvs.conf <<EOF
#Load IPVS at boot
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# overlay and br_netfilter modules
cat > /etc/modules-load.d/containerd.conf << EOF
overlay
br_netfilter
EOF
# Distribute the module configs to all nodes
for i in {227..230};do scp /etc/modules-load.d/ipvs.conf 192.168.122.$i:/etc/modules-load.d/;done
for i in {227..230};do scp /etc/modules-load.d/containerd.conf 192.168.122.$i:/etc/modules-load.d/;done
# Load now and verify
for i in {226..230};do ssh 192.168.122.$i "systemctl enable --now systemd-modules-load.service";done
for i in {226..230};do ssh 192.168.122.$i "systemctl status systemd-modules-load.service";done
# Confirm the kernel modules are loaded
for i in {226..230};do ssh 192.168.122.$i "lsmod | grep -e ip_vs -e nf_conntrack";done
for i in {226..230};do ssh 192.168.122.$i "lsmod | grep -e br_netfilter -e overlay";done

7. Tune the system
cat >>/etc/security/limits.conf  << EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
sysctl --system
# Distribute to all nodes and apply
for i in {227..230};do scp /etc/security/limits.conf 192.168.122.$i:/etc/security/;done
for i in {227..230};do scp /etc/sysctl.d/k8s.conf 192.168.122.$i:/etc/sysctl.d/;done
for i in {226..230};do ssh 192.168.122.$i "sysctl --system";done

# Optional
8. Upgrade the kernel (the ELRepo example below targets EL7; Rocky 9 already ships a recent kernel)
rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum -y --enablerepo=elrepo-kernel install kernel-ml.x86_64 kernel-ml-devel.x86_64
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
grub2-set-default 0
reboot



When kubeadm initializes Kubernetes, all control-plane components run as Pods and can recover from failures automatically.
kubeadm is an automation tool: it simplifies deployment and hides the details, so you work less with the individual components. It suits quick deployments and scenarios with strong automation requirements.

Kubernetes v1.24 removed dockershim, so Docker is no longer supported directly and containerd is the default runtime. The cri-dockerd adapter (maintained outside the Kubernetes project) can still integrate Docker Engine with Kubernetes.

With Docker as the container runtime: kubelet > dockershim (inside the kubelet process) > dockerd > containerd
With containerd as the container runtime: kubelet > CRI plugin (inside the containerd process) > containerd
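
Once a cluster is up, a quick way to confirm which runtime each kubelet is actually using is to read it from the node objects. A minimal sketch (plain kubectl; the node name comes from the hosts file above):
kubectl get nodes -o wide        #the CONTAINER-RUNTIME column shows containerd://... or docker://...
kubectl get node r-k8s-n1 -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'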

Deploying Kubernetes with kubeadm

Since v1.24, Kubernetes recommends containerd as the container runtime.

Docker as the Runtime

1. Configure the Kubernetes repository on all nodes
# Repository option A: v1.32 (use either this or the v1.28 repository below)
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/rpm/repodata/repomd.xml.key
EOF
# Repository option B: v1.28
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/repodata/repomd.xml.key
EOF
yum clean all 
yum makecache fast -y
for i in {227..230};do scp /etc/yum.repos.d/kubernetes.repo 192.168.122.$i:/etc/yum.repos.d/;done
for i in {226..230};do ssh 192.168.122.$i "yum repolist";done 

2. Install the required packages
mkdir -p /opt/k8s_packages
yum list kubelet --showduplicates | sort -r
yum list kubectl --showduplicates | sort -r
yum list kubeadm --showduplicates | sort -r
yum list cri-tools --showduplicates | sort -r
yum list kubernetes-cni --showduplicates | sort -r
yum install -y --downloadonly --downloaddir /opt/k8s_packages kubelet-1.32.0 kubeadm-1.32.0 kubectl-1.32.0 cri-tools-1.32.0 kubernetes-cni-1.6.0
cd /opt/k8s_packages && rpm -ivh *.rpm
# The mirror may fail GPG checks on the repository metadata; if so, install with GPG checking disabled
yum install -y --nogpgcheck kubelet kubeadm kubectl


3. Deploy Docker
# Configure the Docker repository
yum config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install Docker and its plugins - method 1 (install online)
yum -y install docker-ce docker-ce-cli docker-compose-plugin
# Install Docker and its plugins - method 2 (download the RPMs, then install offline)
mkdir -p /opt/k8s/docker
yum list docker-ce --showduplicates 		#list available versions
yum list docker-ce-cli --showduplicates 
yum list docker-compose-plugin --showduplicates  
yum install --downloadonly --downloaddir /opt/k8s/docker/ docker-ce docker-ce-cli docker-compose-plugin
cd /opt/k8s/docker && rpm -ivh *.rpm

# Modify the Docker configuration file
# Initial Docker configuration
######
#data-root          #directory for container data
#exec-opts          #cgroup driver setting
#registry-mirrors   #registry mirrors for faster image pulls
mkdir -p /etc/docker /opt/docker
cat > /etc/docker/daemon.json << EOF
{
    "data-root":"/opt/docker/docker_data",
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": [
                "https://atomhub.openatom.cn",
                "https://docker.m.daocloud.io"
            ]
}
EOF

systemctl daemon-reload
systemctl restart docker
systemctl enable --now docker
docker version
docker info | grep -e "Registry Mirrors" -A 2 -e "Docker Root Dir" -e "Cgroup Driver"

4. Install Go
# Download the binary tarball
wget https://golang.google.cn/dl/go1.22.5.linux-amd64.tar.gz
# Extract into /usr/local
tar -C /usr/local/ -xzvf go1.22.5.linux-amd64.tar.gz 
# Add environment variables (quote the heredoc delimiter so $PATH/$GOROOT are not expanded now)
cat >> /etc/profile <<'EOF'
#golang
export GOROOT=/usr/local/go
export GOPATH=/usr/local/gopath
export PATH=$PATH:$GOROOT/bin
EOF
# Load the environment variables
source /etc/profile

5. Install cri-dockerd
# Clone the cri-dockerd source
git clone https://github.com/Mirantis/cri-dockerd.git
# Build
cd cri-dockerd
mkdir bin
go build -o bin/cri-dockerd
mkdir -p /usr/local/bin
install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
cp -a packaging/systemd/* /etc/systemd/system
vim /etc/systemd/system/cri-docker.service
# change the ExecStart line to:
ExecStart=/usr/local/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9

# Point crictl at cri-dockerd
echo "runtime-endpoint: unix:///var/run/cri-dockerd.sock" >> /etc/crictl.yaml
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps

# Point the kubelet at cri-dockerd
cat >> /var/lib/kubelet/kubeadm-flags.env <<EOF
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9"
EOF

# Copy the configuration files to the other nodes, or repeat the steps above on each node
scp /etc/systemd/system/cri-docker.service /etc/systemd/system/cri-docker.socket [k8s-node]:/etc/systemd/system/
scp /etc/docker/daemon.json [k8s-node]:/etc/docker/
scp /etc/profile [k8s-node]:/etc/
scp /var/lib/kubelet/kubeadm-flags.env [k8s-node]:/var/lib/kubelet

# Pull the images with Docker
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.30.2
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.30.2
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.30.2
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.30.2
docker pull registry.aliyuncs.com/google_containers/coredns:v1.11.1
docker pull registry.aliyuncs.com/google_containers/pause:3.9
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.12-0

# Export the images to tar files with docker save
docker save -o kube-apiserver:v1.30.2.tar registry.aliyuncs.com/google_containers/kube-apiserver:v1.30.2
docker save -o kube-controller-manager:v1.30.2.tar registry.aliyuncs.com/google_containers/kube-controller-manager:v1.30.2
docker save -o kube-scheduler:v1.30.2.tar registry.aliyuncs.com/google_containers/kube-scheduler:v1.30.2
docker save -o coredns:v1.11.1.tar registry.aliyuncs.com/google_containers/coredns:v1.11.1
docker save -o kube-proxy:v1.30.2.tar registry.aliyuncs.com/google_containers/kube-proxy:v1.30.2
docker save -o pause:3.9.tar registry.aliyuncs.com/google_containers/pause:3.9
docker save -o etcd:3.5.12-0.tar registry.aliyuncs.com/google_containers/etcd:3.5.12-0

# Import the image tar files
docker load -i kube-apiserver:v1.30.2.tar
docker load -i kube-controller-manager:v1.30.2.tar
docker load -i kube-scheduler:v1.30.2.tar
docker load -i kube-proxy:v1.30.2.tar
docker load -i coredns:v1.11.1.tar
docker load -i pause:3.9.tar
docker load -i etcd:3.5.12-0.tar

ctr is a standalone client binary for containerd. It does not use containerd's registry-mirror configuration and by default does not read /etc/containerd/config.toml; such settings must be passed on the command line.
crictl, by contrast, reads containerd's configuration in /etc/containerd/config.toml and therefore honors its registry mirrors. crictl is a command-line interface for CRI-compatible container runtimes, useful for inspecting and debugging the runtime and applications on Kubernetes worker nodes.
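
As a small illustration of that difference (a sketch; the paths assume the containerd and crictl configuration set up later in this section):
#ctr must be told the namespace and the mirror directory explicitly
ctr -n k8s.io image pull --hosts-dir /etc/containerd/certs.d docker.io/library/busybox:latest
#crictl uses the endpoint in /etc/crictl.yaml and containerd's own mirror configuration
crictl pull docker.io/library/busybox:latest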

containerd as the Runtime

1. Configure the Kubernetes repository on all nodes
# Repository option A: v1.31 (use either this or the v1.28 repository below)
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/rpm/repodata/repomd.xml.key
EOF
# Repository option B: v1.28
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/repodata/repomd.xml.key
EOF
yum clean all 
yum makecache fast -y
for i in {227..230};do scp /etc/yum.repos.d/kubernetes.repo 192.168.122.$i:/etc/yum.repos.d/;done
for i in {226..230};do ssh 192.168.122.$i "yum repolist";done 

2. Install the required packages
mkdir -p /opt/k8s_packages
yum list kubelet --showduplicates | sort -r
yum list kubectl --showduplicates | sort -r
yum list kubeadm --showduplicates | sort -r
yum list cri-tools --showduplicates | sort -r
yum list kubernetes-cni --showduplicates | sort -r
for i in {226..230};do ssh 192.168.122.$i 'mkdir -p /opt/k8s_packages';done
# Download the RPMs (plus their dependencies) to the specified directory for offline installation
yum install --downloadonly --downloaddir /opt/k8s_packages kubelet-1.31.12 kubeadm-1.31.12 kubectl-1.31.12 cri-tools-1.31.1 kubernetes-cni-1.5.1
for i in {227..230};do scp -r /opt/k8s_packages/ 192.168.122.$i:/opt/;done
for i in {226..230};do ssh 192.168.122.$i 'rpm -ivh /opt/k8s_packages/*.rpm';done
# The mirror may fail GPG checks on the repository metadata; if so, install with GPG checking disabled
yum install -y --nogpgcheck kubelet kubeadm kubectl

3. Deploy containerd
# Install containerd - option A: via yum (choose either option A or B)
# Configure the containerd repository
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
for i in {227..230};do scp -r /etc/yum.repos.d/docker-ce.repo 192.168.122.$i:/etc/yum.repos.d/;done
yum list containerd.io --showduplicate 
yum install -y --downloadonly --downloaddir /opt/k8s_packages/ containerd.io
for i in {227..230};do scp -r /opt/k8s_packages/ 192.168.122.$i:/opt/;done
for i in {226..230};do ssh 192.168.122.$i 'rpm -ivh /opt/k8s_packages/*.rpm';done

# Install containerd - option B: from the binary release
cd /opt/k8s_packages/
wget https://hub.gitmirror.com/https://github.com/containerd/containerd/releases/download/v1.7.9/cri-containerd-cni-1.7.9-linux-amd64.tar.gz
for i in {227..230};do scp -r /opt/k8s_packages/cri-containerd-cni-1.7.9-linux-amd64.tar.gz 192.168.122.$i:/opt/k8s_packages/;done
for i in {227..230};do ssh 192.168.122.$i "cd /opt/k8s_packages && tar -zxvf cri-containerd-cni-1.7.9-linux-amd64.tar.gz -C /";done
# Generate the default configuration file
for i in {226..230};do ssh 192.168.122.$i "mkdir -p /etc/containerd";done
containerd config default | tee /etc/containerd/config.toml
# Initial containerd configuration: edit these keys
vim /etc/containerd/config.toml
root = "/opt/containerd/containerd_data"	#directory for container data
SystemdCgroup = true						#use the systemd cgroup driver
config_path = "/etc/containerd/certs.d"		#directory holding registry mirror configs
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.10"		#change registry and tag; get the required tag from: kubeadm config images list --kubernetes-version=v1.32.0
# Copy to all nodes
for i in {227..230};do scp -r /etc/containerd/config.toml 192.168.122.$i:/etc/containerd/;done	

# Configure registry mirrors (optional)
# harbor.example.net is a private Harbor registry on the internal network
# DNS resolution for it must be configured
echo "192.168.122.220 harbor.example.net" >> /etc/hosts
for i in {227..230};do scp -r /etc/hosts 192.168.122.$i:/etc/;done
# Docker Hub mirror
mkdir -p /etc/containerd/certs.d/docker.io
cat <<'EOF' | sudo tee /etc/containerd/certs.d/docker.io/hosts.toml 
server = "https://docker.io"
[host."https://dockerproxy.com"]
  capabilities = ["pull", "resolve"]

[host."https://docker.m.daocloud.io"]
  capabilities = ["pull", "resolve"]
EOF

# registry.k8s.io mirror
mkdir -p /etc/containerd/certs.d/registry.k8s.io
tee /etc/containerd/certs.d/registry.k8s.io/hosts.toml << 'EOF'
server = "https://registry.k8s.io"
[host."https://k8s.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
# private registry
[host."https://harbor.example.net"]
  capabilities = ["pull","resolve","push"]
  skip_verify = true
EOF

for i in {227..230};do scp -r /etc/containerd/certs.d 192.168.122.$i:/etc/containerd/;done

# Start the containerd service
for i in {226..230};do ssh 192.168.122.$i 'systemctl daemon-reload && systemctl enable --now containerd';done
for i in {226..230};do ssh 192.168.122.$i 'ctr version';done
# Verify the mirrors work (pulling through the private Harbor registry also requires logging in to it)
ctr -n k8s.io image pull --hosts-dir "/etc/containerd/certs.d" docker.io/library/nginx:latest


# Test via the crictl CRI interface
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
for i in {227..230};do ssh 192.168.122.$i 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps';done
for i in {227..230};do scp -r /etc/crictl.yaml 192.168.122.$i:/etc/;done
for i in {226..230};do ssh 192.168.122.$i 'systemctl restart containerd';done


# Configure the kubelet to use containerd
cat >> /var/lib/kubelet/kubeadm-flags.env <<EOF
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.10"
EOF
for i in {227..230};do scp -r /var/lib/kubelet/kubeadm-flags.env 192.168.122.$i:/var/lib/kubelet;done

# Reload
for i in {226..230};do ssh 192.168.122.$i 'systemctl daemon-reload && systemctl restart containerd';done

# Verify
crictl images
crictl ps
crictl rmi [image]
crictl rm [container]

4. Enable the kubelet service on all nodes
for i in {226..230};do ssh 192.168.122.$i 'systemctl enable --now kubelet';done

5. List the required images
kubeadm config images list --kubernetes-version=v1.32.0

6. Prepare the required images
Managing images with ctr
# Create the namespace
ctr namespace create k8s.io
for i in {227..230};do ssh 192.168.122.$i "ctr namespace create k8s.io";done
for i in {226..230};do ssh 192.168.122.$i 'ctr namespace ls';done

# Pull the images
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.32.0
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.32.0
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.32.0
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/kube-proxy:v1.32.0
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/etcd:3.5.16-0
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/coredns:v1.11.3
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/pause:3.10

# List namespaces
ctr ns ls
# List images
ctr -n k8s.io i ls -q
# Export images
mkdir /opt/k8s_images
cd !$
ctr -n k8s.io i export kube-apiserver:v1.32.0.tar  registry.aliyuncs.com/google_containers/kube-apiserver:v1.32.0
ctr -n k8s.io i export kube-controller-manager:v1.32.0.tar  registry.aliyuncs.com/google_containers/kube-controller-manager:v1.32.0
ctr -n k8s.io i export kube-scheduler:v1.32.0.tar  registry.aliyuncs.com/google_containers/kube-scheduler:v1.32.0
ctr -n k8s.io i export kube-proxy:v1.32.0.tar  registry.aliyuncs.com/google_containers/kube-proxy:v1.32.0 
ctr -n k8s.io i export etcd:3.5.16-0.tar  registry.aliyuncs.com/google_containers/etcd:3.5.16-0 
ctr -n k8s.io i export coredns:v1.11.3.tar  registry.aliyuncs.com/google_containers/coredns:v1.11.3 
ctr -n k8s.io i export pause:3.10.tar  registry.aliyuncs.com/google_containers/pause:3.10
for i in {227..230};do scp -r /opt/k8s_images/ 192.168.122.$i:/opt/k8s_images/;done


# Import images
ctr -n k8s.io image import kube-apiserver:v1.32.0.tar
ctr -n k8s.io image import kube-controller-manager:v1.32.0.tar
ctr -n k8s.io image import kube-scheduler:v1.32.0.tar
ctr -n k8s.io image import kube-proxy:v1.32.0.tar
ctr -n k8s.io image import coredns:v1.11.3.tar
ctr -n k8s.io image import pause:3.10.tar
ctr -n k8s.io image import etcd:3.5.16-0.tar
for i in {227..230};do ssh 192.168.122.$i 'cd /opt/k8s_images && ls | xargs -i ctr -n k8s.io image import "{}"';done
for i in {226..230};do ssh 192.168.122.$i 'ctr -n k8s.io images ls -q | wc -l';done

# Delete images (unneeded images can be removed; for reference only)
ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/kube-apiserver:v1.30.2


7. Initialize the cluster
kubeadm init --apiserver-advertise-address=192.168.122.226 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.32.0 --service-cidr=10.98.0.0/16 --pod-network-cidr=10.246.0.0/16 --token-ttl=0 --upload-certs 

8. Copy the kubeconfig as prompted
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile

9. Join the worker nodes to the cluster
kubeadm join 192.168.122.226:6443 --token a8cuiy.qdia1l6zu4n4p09b --discovery-token-ca-cert-hash sha256:e9a528e6dfca81f194a4e43bfd215148d3f08786e689ed50511ea87349f903a8
# If the join command has been lost, regenerate it
kubeadm token create --print-join-command
# Label the worker nodes
kubectl label nodes r-k8s-n1 kubernetes.io/role=worker

10. Join the standby control-plane nodes to the cluster
# Inspect the kubeadm-config ConfigMap
kubectl describe configmap kubeadm-config -n kube-system
# Edit the ConfigMap
kubectl edit configmap kubeadm-config -n kube-system
# Add the following parameter alongside kubernetesVersion (inside ClusterConfiguration)
controlPlaneEndpoint: 192.168.122.226:6443
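# For reference, after the edit the ClusterConfiguration block inside the kubeadm-config ConfigMap should look roughly like this (a sketch; only controlPlaneEndpoint is added, the other fields stay as kubeadm generated them, and the apiVersion may be v1beta3 or v1beta4 depending on the kubeadm release):
ClusterConfiguration: |
  apiVersion: kubeadm.k8s.io/v1beta4
  kind: ClusterConfiguration
  kubernetesVersion: v1.32.0
  controlPlaneEndpoint: 192.168.122.226:6443
  ...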
# Create the certificate directories on the standby control-plane nodes
for i in {227..228};do ssh 192.168.122.$i "mkdir -p /etc/kubernetes/pki/etcd /root/.kube";done
# Copy the certificates to the standby control-plane nodes
for i in {227..228};do scp /etc/kubernetes/pki/ca.* 192.168.122.$i:/etc/kubernetes/pki/;done
for i in {227..228};do scp /etc/kubernetes/pki/sa.*  192.168.122.$i:/etc/kubernetes/pki/;done
for i in {227..228};do scp /etc/kubernetes/pki/front-proxy-ca.*  192.168.122.$i:/etc/kubernetes/pki/;done
for i in {227..228};do scp /etc/kubernetes/pki/etcd/ca.*  192.168.122.$i:/etc/kubernetes/pki/etcd/;done
# Join the standby control-plane nodes
kubeadm join 192.168.122.226:6443 --token a8cuiy.qdia1l6zu4n4p09b --control-plane \
--discovery-token-ca-cert-hash sha256:e9a528e6dfca81f194a4e43bfd215148d3f08786e689ed50511ea87349f903a8

11. Download and install the cni-plugins
mkdir -p /opt/cni/bin
curl -L -o /opt/k8s_packages/cni-plugins-linux-amd64-v1.5.1.tgz https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-amd64-v1.5.1.tgz
tar -C /opt/cni/bin -zxf /opt/k8s_packages/cni-plugins-linux-amd64-v1.5.1.tgz

12. Install Flannel or Calico to configure the cluster network
# flannel 
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# calico
#older versions (manifest install)
curl -O https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
sed -i 's#docker.io/calico#m.daocloud.io/docker.io/calico#g' calico.yaml
kubectl apply -f calico.yaml
#newer versions (operator install)
wget https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/custom-resources.yaml
#adjust the CIDR in custom-resources.yaml to match --pod-network-cidr before creating it
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml

13. Change kube-proxy from iptables mode to IPVS mode (IPVS has been stable since Kubernetes 1.11)
curl 127.0.0.1:10249/proxyMode
kubectl edit cm -n kube-system kube-proxy
mode: "ipvs"
curl 127.0.0.1:10249/proxyMode
Restart the kube-proxy Pods (or the nodes) for the change to take effect; see the sketch below.
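A minimal sketch of restarting kube-proxy and verifying the switch (assumes the kube-proxy DaemonSet that kubeadm creates in kube-system):
kubectl -n kube-system rollout restart daemonset kube-proxy
curl 127.0.0.1:10249/proxyMode        #should now return ipvs
ipvsadm -Ln                           #IPVS virtual servers should be listed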

14. Control-plane configuration file path (back up before modifying anything)
ls /etc/kubernetes/manifests

15. Verify the cluster deployment
kubectl get nodes
kubectl get pods -n kube-system

16. Verify cluster networking
cat > busybox.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF
kubectl apply -f busybox.yaml
kubectl get pod
kubectl get service
kubectl exec busybox -- nslookup kubernetes
kubectl exec -it busybox -- sh


17. Quick reset after a failed installation
# Reset the node outright
kubeadm reset -f
iptables -F && ipvsadm -C
# A problematic node can be drained, removed, and re-joined
kubectl drain <node> --delete-emptydir-data --force --ignore-daemonsets
kubectl delete node <node>





Deploying Kubernetes from Binaries

etcd is a distributed, highly available, consistent key-value store implemented in Go. It is mainly used for shared configuration and service discovery; etcd replicates data across the cluster nodes and keeps it consistent.

/var/lib/etcd/default: the default data directory, containing two subdirectories, snap and wal
snap: snapshot data capturing etcd's state; snapshots keep the WAL files from growing without bound
wal: the write-ahead log recording the complete history of changes; every modification is written to the WAL before it is committed

Characteristics of etcd (a brief etcdctl example follows the list)

  1. Simple: provides a simple, easy-to-use HTTP API
  2. Storage: data is stored hierarchically, much like directories in an ordinary file system
  3. Secure communication: supports SSL certificate verification to secure traffic
  4. Consistent and reliable: built on the Raft consensus algorithm, giving distributed data storage and service calls consistency and high availability
  5. Fully replicated: every node in the cluster keeps a complete copy of the data
  6. High performance: high benchmarked write speed and fast reads
  7. Highly available: suited to clustered deployment, avoiding single points of failure
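
A brief read/write sketch to illustrate the key-value model (assumes the etcd cluster and certificate paths created in the steps below):
etcdctl --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.122.226:2379 put /demo/message "hello etcd"
etcdctl --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.122.226:2379 get /demo/message
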
Deploying the etcd cluster
1. Download and install the etcd binaries
cd /opt
wget  https://github.com/etcd-io/etcd/releases/download/v3.5.15/etcd-v3.5.15-linux-amd64.tar.gz
tar -zxvf etcd-v3.5.15-linux-amd64.tar.gz
cp -p etcd-v3.5.15-linux-amd64/etcd* /usr/local/bin
scp -p etcd-v3.5.15-linux-amd64/etcd* k8s-master2-rocky:/usr/local/bin
scp -p etcd-v3.5.15-linux-amd64/etcd* k8s-master3-rocky:/usr/local/bin
mkdir -p /etc/etcd/ssl		#certificate directory
mkdir -p /var/lib/etcd/default.etcd			#data directory
cd /etc/etcd/ssl

2. Create the CA certificate
cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "Kubernetes",
      "OU": "CA"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF

3. Issue the etcd certificate
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.122.226",
    "192.168.122.227",
    "192.168.122.228",
    "192.168.122.219"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "etcd",
      "OU": "etcd"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare etcd

4. Create the etcd configuration file
# Adjust ETCD_NAME and the node IPs for each etcd node
cat << EOF > /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.122.226:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.122.226:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.122.226:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.122.226:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.122.226:2380,etcd2=https://192.168.122.227:2380,etcd3=https://192.168.122.228:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# Parameter descriptions
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: listen address for peer (cluster) communication
ETCD_LISTEN_CLIENT_URLS: listen address for client access
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to clients
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state at startup; "new" for a new cluster, "existing" when joining an existing cluster (see the sketch below)
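
As context for ETCD_INITIAL_CLUSTER_STATE, adding a member to an already-running cluster works roughly like this (a sketch; the member name etcd4 and its IP are hypothetical):
#on an existing member, register the new node first
etcdctl --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.122.226:2379 member add etcd4 --peer-urls=https://192.168.122.231:2380
#on the new node, add itself to ETCD_INITIAL_CLUSTER and start it with ETCD_INITIAL_CLUSTER_STATE="existing"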

5. Create the systemd unit file
cat << EOF > /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

6. Verify
systemctl daemon-reload && systemctl enable --now etcd
systemctl status etcd
export ETCDCTL_API=3
etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem  --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.122.226:2379,https://192.168.122.227:2379,https://192.168.122.228:2379 endpoint health
etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem  --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.122.226:2379,https://192.168.122.227:2379,https://192.168.122.228:2379 endpoint status

Installing the Kubernetes components
1. Download the binaries
# Download the cfssl certificate tools and make them executable
wget -c https://github.com/cloudflare/cfssl/releases/download/v1.6.5/cfssl_1.6.5_linux_amd64 -O /usr/local/bin/cfssl
wget -c https://github.com/cloudflare/cfssl/releases/download/v1.6.5/cfssljson_1.6.5_linux_amd64 -O /usr/local/bin/cfssljson
wget -c https://github.com/cloudflare/cfssl/releases/download/v1.6.5/cfssl-certinfo_1.6.5_linux_amd64 -O /usr/local/bin/cfssl-certinfo
chmod +x  /usr/local/bin/cfssl*
cfssl version

# Download the containerd binary bundle
wget https://hub.gitmirror.com/https://github.com/containerd/containerd/releases/download/v1.7.9/cri-containerd-cni-1.7.9-linux-amd64.tar.gz
tar -zxvf cri-containerd-cni-1.7.9-linux-amd64.tar.gz -C /

# Download the Kubernetes server binaries
wget https://dl.k8s.io/v1.30.2/kubernetes-server-linux-amd64.tar.gz
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager kubectl /usr/local/bin
scp kube-apiserver kube-scheduler kube-controller-manager kubectl [k8s-master2]:/usr/local/bin
scp kube-apiserver kube-scheduler kube-controller-manager kubectl [k8s-master3]:/usr/local/bin
scp kubelet kube-proxy k8s-node1-rocky:/usr/local/bin/
scp kubelet kube-proxy k8s-node2-rocky:/usr/local/bin/












Deploying Kubernetes with KubeKey

1. Install dependencies on all nodes
for i in {70..73};do ssh 192.168.122.$i "yum -y install socat conntrack";done

2. Install the KubeKey tool on the master node
# Set the download region
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.1.7 sh -

3. Prepare the configuration file
# A Kubernetes cluster with two control-plane nodes and two workers
cat > config-sample.yaml << EOF
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-m1, address: 192.168.122.70, internalAddress: 192.168.122.70, user: root, password: 'Pass@2024'}
  - {name: k8s-m2, address: 192.168.122.71, internalAddress: 192.168.122.71, user: root, password: 'Pass@2024'}
  - {name: k8s-n1, address: 192.168.122.72, internalAddress: 192.168.122.72, user: root, password: 'Pass@2024'}
  - {name: k8s-n2, address: 192.168.122.73, internalAddress: 192.168.122.73, user: root, password: 'Pass@2024'}
  roleGroups:
    etcd:
    - k8s-m1
    - k8s-m2
    control-plane: 
    - k8s-m1
    - k8s-m2
    worker:
    - k8s-n1
    - k8s-n2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.29.0
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
EOF

4. Create the cluster
./kk create cluster -f config-sample.yaml

5. Verify
kubectl get pod -A



Kubernetes High Availability with HAProxy + Keepalived

Making kube-apiserver highly available
HAProxy + Keepalived (package installation omitted)
Backend IPs: 192.168.122.226-228:6443
VIP: 192.168.122.219
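
With this front end in place, a kubeadm-based cluster would point its control-plane endpoint at the VIP and the HAProxy listen port (9443 in the configuration below) rather than at a single apiserver address. A sketch of the corresponding init flag (other flags as in the kubeadm section above):
kubeadm init --control-plane-endpoint=192.168.122.219:9443 --upload-certs ...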

Configure on the control-plane nodes
1. HAProxy configuration
# Identical configuration on the primary and standby nodes
cat << EOF > /etc/haproxy/haproxy.cfg
global
 maxconn 2000
 log 127.0.0.1 local0 err
 stats timeout 30s

defaults
 log global
 mode http
 option httplog
 timeout connect 5000
 timeout client 50000
 timeout server 50000
 timeout http-request 15s
 timeout http-keep-alive 15s

listen monitor-in
 bind *:33355
 mode http
 option httplog
 monitor-uri /monitor

frontend k8s-master
 bind 0.0.0.0:9443
 bind 127.0.0.1:9443
 mode tcp
 option tcplog
 tcp-request inspect-delay 5s
 default_backend k8s-master

backend k8s-master
 mode tcp
 option tcplog
 option tcp-check
 balance roundrobin
 default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
 server  k8s-master1  192.168.122.226:6443 check
 server  k8s-master2  192.168.122.227:6443 check
 server  k8s-master3  192.168.122.228:6443 check
EOF

2. Keepalived configuration
# Primary node configuration
cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    # adjust the interface name to match the host
    interface eth0 
    mcast_src_ip 192.168.122.226
    virtual_router_id 70
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.122.219
    }
    track_script {
      chk_apiserver 
    }
}
EOF

#Standby node 1 configuration
cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1

}
vrrp_instance VI_1 {
    state BACKUP
    # adjust the interface name to match the host
    interface eth0
    mcast_src_ip 192.168.122.227
    virtual_router_id 70
    priority 80
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.122.219
    }
    track_script {
      chk_apiserver 
    }
}
EOF
#Standby node 2 configuration
cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1

}
vrrp_instance VI_1 {
    state BACKUP
    # adjust the interface name to match the host
    interface eth0
    mcast_src_ip 192.168.122.228
    virtual_router_id 70
    priority 60
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.122.219
    }
    track_script {
      chk_apiserver 
    }
}
EOF


3. Keepalived health-check script check_apiserver.sh
cat << 'EOF' > /etc/keepalived/check_apiserver.sh
#!/bin/bash
# Check whether the local HAProxy front end for kube-apiserver is alive;
# if it is not, stop keepalived so the VIP fails over to another node.

check_haproxy() {
	pgrep haproxy > /dev/null 2>&1
	return $?
}

for ((i=1;i<=3;i++));do
	if check_haproxy;then
		exit 0
	fi
	sleep 1
done

echo "Stop Keepalived Service"
systemctl stop keepalived
exit 1
EOF
chmod +x /etc/keepalived/check_apiserver.sh


4. Verify
systemctl daemon-reload && systemctl enable --now haproxy keepalived
systemctl status haproxy keepalived
ip a show eth0 
