Deployment Options
Kubernetes can be deployed in any of the following ways:
minikube: a tool for quickly standing up a single-node Kubernetes environment
kubeadm: a tool for quickly bootstrapping a full Kubernetes cluster
Binary packages: download each component's binaries from the official site and install them one by one
Here we install with kubeadm.
Cluster Planning
A Kubernetes cluster can be deployed as one master with multiple workers, or multiple masters with multiple workers. Here we use one master with multiple workers.
Hostname | IP | Role | CPU (minimum) | Memory (minimum) |
---|---|---|---|---|
k8s-master | 192.168.3.219 | master | 2 cores | 2 GB |
k8s-node1 | 192.168.3.220 | node | 2 cores | 2 GB |
k8s-node2 | 192.168.3.242 | node | 2 cores | 2 GB |
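The join command later in this guide reaches the master by hostname (k8s-master:6443), so every server must be able to resolve the other servers' names. A minimal sketch using /etc/hosts and the IPs from the table above (run on all three machines; adjust if you use DNS instead):
# Make all cluster hostnames resolvable on every server
cat <<EOF | tee -a /etc/hosts
192.168.3.219 k8s-master
192.168.3.220 k8s-node1
192.168.3.242 k8s-node2
EOF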
Installing containerd
For the mapping between containerd and Kubernetes versions, see:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md
https://github.com/kubernetes/kubernetes/blob/master/build/dependencies.yaml
For the installation of containerd itself, see: containerd installation
Installing the Kubernetes Cluster
Base environment
Disable SELinux
To disable temporarily (effective immediately, reverted on reboot):
[root@k8s-master ~]# setenforce 0
[root@k8s-master ~]# getenforce
Permissive
[root@k8s-master ~]#
To disable permanently (requires a server reboot):
[root@k8s-master ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Disable swap
A swap partition provides virtual memory: when physical memory is exhausted, disk space is used as if it were RAM. This degrades performance, and kubelet by default refuses to run with swap enabled, so swap should be turned off. If it really cannot be turned off, the cluster configuration must be adjusted instead, as sketched below.
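A minimal sketch of tolerating swap, assuming the RPM-packaged kubelet, whose systemd unit reads extra flags from /etc/sysconfig/kubelet (note that --fail-swap-on is deprecated in favor of the failSwapOn field in the kubelet configuration file):
# Let kubelet start even though swap is detected
echo 'KUBELET_EXTRA_ARGS=--fail-swap-on=false' | tee /etc/sysconfig/kubelet
# kubeadm's swap preflight check must then also be skipped, i.e. add
# --ignore-preflight-errors=Swap to the kubeadm init/join commands below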
To turn off temporarily:
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# free -m
total used free shared buff/cache available
Mem: 1819 286 632 9 900 1364
Swap: 0 0 0
[root@k8s-master ~]#
To turn off permanently (requires a server reboot):
[root@k8s-master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
Bridge settings
For iptables on each server to see bridged traffic, bridge filtering and IP forwarding must be enabled.
Create /etc/modules-load.d/k8s.conf:
[root@k8s-master ~]# cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
[root@k8s-master ~]#
Create /etc/sysctl.d/k8s.conf:
[root@k8s-master ~]# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-master ~]#
Apply the configuration:
[root@k8s-master ~]# sysctl --system
Load the br_netfilter bridge-filtering module and the overlay filesystem module (used by containerd). If sysctl --system above reported that the net.bridge keys did not exist, re-run it after loading the modules:
[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# modprobe overlay
Verify that the modules loaded successfully:
[root@k8s-master ~]# lsmod | grep -e br_netfilter -e overlay
br_netfilter 22256 0
bridge 151336 1 br_netfilter
overlay 91659 0
[root@k8s-master ~]#
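With the modules loaded, you can confirm that the three sysctl values are in effect:
# Each of these should report 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward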
Configuring IPVS
Services can be proxied with either an iptables-based or an ipvs-based model; ipvs performs better, but its kernel modules must be loaded manually. Loading the modules alone does not switch kube-proxy to ipvs; that step is sketched at the end of this subsection.
Install ipset and ipvsadm:
[root@k8s-master ~]# yum install ipset ipvsadm
Create the script file /etc/sysconfig/modules/ipvs.modules with the following content. (On kernels 4.19 and newer, nf_conntrack_ipv4 has been merged into nf_conntrack; load nf_conntrack there instead.)
[root@k8s-master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
[root@k8s-master ~]#
Make the script executable, then run it:
[root@k8s-master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@k8s-master ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules
[root@k8s-master ~]#
Verify that the modules loaded successfully:
[root@k8s-master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145458 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4 15053 2
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
nf_conntrack 139264 7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
[root@k8s-master ~]#
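Loading the modules only prepares the kernel; kube-proxy still runs in iptables mode until told otherwise. In a kubeadm cluster its settings live in the kube-proxy ConfigMap in the kube-system namespace, so once the cluster is up (after kubeadm init below) the mode can be switched roughly as follows; treat this as a sketch:
# Set  mode: "ipvs"  in the config section of the ConfigMap
kubectl -n kube-system edit configmap kube-proxy
# Recreate the kube-proxy pods so they pick up the new mode
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
# ipvsadm should then list the virtual servers kube-proxy created
ipvsadm -Ln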
Installing kubelet, kubeadm and kubectl
Add the yum repository:
[root@k8s-master ~]# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master ~]#
Install the packages, then enable and start kubelet:
[root@k8s-master ~]# yum install -y --setopt=obsoletes=0 kubelet-1.24.0 kubeadm-1.24.0 kubectl-1.24.0
[root@k8s-master ~]# systemctl enable kubelet --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-master ~]#
Notes:
--setopt=obsoletes=0: obsoletes=1 means that updating RPM packages also removes the packages they obsolete; 0 means old packages are kept when updating.
Once kubelet is running, journalctl -f -u kubelet shows its detailed logs.
With kubeadm v1.22 and later, kubelet defaults to systemd as its cgroup driver; containerd must be configured to match, as shown in the sketch after these notes.
After being enabled, kubelet restarts every few seconds: it sits in a crash loop waiting for instructions from kubeadm.
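Because kubelet uses the systemd cgroup driver, containerd must use it as well, or pods will fail to start. A sketch, assuming /etc/containerd/config.toml was generated with containerd config default (which sets SystemdCgroup = false under the runc runtime options):
# Switch containerd's runc runtime to the systemd cgroup driver
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd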
Downloading the images each machine needs
List the image versions the cluster requires:
[root@k8s-master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.24.0
k8s.gcr.io/kube-controller-manager:v1.24.0
k8s.gcr.io/kube-scheduler:v1.24.0
k8s.gcr.io/kube-proxy:v1.24.0
k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/coredns/coredns:v1.8.6
[root@k8s-master ~]#
Create the image-download script images.sh, then run it. Worker nodes only need kube-proxy and pause.
[root@k8s-master ~]# tee ./images.sh <<'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.24.0
kube-controller-manager:v1.24.0
kube-scheduler:v1.24.0
kube-proxy:v1.24.0
pause:3.7
etcd:3.5.3-0
coredns:v1.8.6
)
for imageName in "${images[@]}" ; do
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
EOF
[root@k8s-master ~]#
[root@k8s-master ~]# chmod +x ./images.sh && ./images.sh
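Once the script finishes, confirm that all seven images landed in containerd's image store:
# Should list the seven images pulled above
crictl images | grep google_containers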
Initialize the control plane (run only on the master node)
[root@k8s-master ~]# kubeadm init \
--apiserver-advertise-address=192.168.3.219 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.24.0 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
...... output omitted ......
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join k8s-master:6443 --token yzicfs.d50rrfxpd3a0wokb \
--discovery-token-ca-cert-hash sha256:8548affb49155f0ba53a0ac4eb9500a060ee3b4076ad8b66bd973bab92d78103 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s-master:6443 --token yzicfs.d50rrfxpd3a0wokb \
--discovery-token-ca-cert-hash sha256:8548affb49155f0ba53a0ac4eb9500a060ee3b4076ad8b66bd973bab92d78103
[root@k8s-master ~]#
Notes:
Add --v=6 or --v=10 to the command for more detailed logs.
The network ranges passed here must not overlap with each other or with the host network. For example, a pod CIDR of 192.168.0.0/16 would overlap with the host network 192.168.3.x used here.
--pod-network-cidr: the IP address range for the pod network; the value above can be used as-is.
--service-cidr: the IP address range for service virtual IPs. The default is 10.96.0.0/12; the value above can be used as-is.
--apiserver-advertise-address: the IP address the API server advertises and listens on.
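Before installing the network plugin, set up kubectl on the master exactly as the init output instructs:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config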
Install the Calico network plugin (master only)
For choosing a Calico version, see:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md
Calico is deployed as a DaemonSet, so it runs a pod on every node.
Download the calico.yaml file:
[root@k8s-master ~]# curl https://docs.projectcalico.org/archive/v3.19/manifests/calico.yaml -O
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 185k 100 185k 0 0 81914 0 0:00:02 0:00:02 --:--:-- 81966
[root@k8s-master ~]#
Edit the file
Change the following section:
# - name: CALICO_IPV4POOL_CIDR
# value: "192.168.0.0/16"
to the following, where the value is the pod-network-cidr passed to kubeadm init:
- name: CALICO_IPV4POOL_CIDR
value: "10.244.0.0/16"
Check which images are needed:
[root@k8s-master ~]# cat calico.yaml | grep image
image: docker.io/calico/cni:v3.19.4
image: docker.io/calico/cni:v3.19.4
image: docker.io/calico/pod2daemon-flexvol:v3.19.4
image: docker.io/calico/node:v3.19.4
image: docker.io/calico/kube-controllers:v3.19.4
[root@k8s-master ~]#
Create the image-download script, then run it:
[root@k8s-master ~]# tee ./calicoImages.sh <<'EOF'
#!/bin/bash
images=(
docker.io/calico/cni:v3.19.4
docker.io/calico/pod2daemon-flexvol:v3.19.4
docker.io/calico/node:v3.19.4
docker.io/calico/kube-controllers:v3.19.4
)
for imageName in "${images[@]}" ; do
crictl pull $imageName
done
EOF
[root@k8s-master ~]#
[root@k8s-master ~]# chmod +x ./calicoImages.sh && ./calicoImages.sh
This pulls four images: calico/node, calico/pod2daemon-flexvol, calico/cni and calico/kube-controllers.
Deploy Calico:
[root@k8s-master ~]# kubectl apply -f calico.yaml
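Calico takes a moment to come up; you can block until its workloads are ready (resource names as defined in the v3.19 manifest):
kubectl -n kube-system rollout status daemonset/calico-node
kubectl -n kube-system rollout status deployment/calico-kube-controllers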
Now check the cluster state from the master:
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-57d95cb479-5zppz 0/1 Pending 0 14s
kube-system calico-node-v6zcv 1/1 Running 0 14s
kube-system coredns-7f74c56694-snzmv 1/1 Running 0 71s
kube-system coredns-7f74c56694-whh84 1/1 Running 0 71s
kube-system etcd-k8s-master 1/1 Running 0 84s
kube-system kube-apiserver-k8s-master 1/1 Running 0 84s
kube-system kube-controller-manager-k8s-master 1/1 Running 0 83s
kube-system kube-proxy-f9w7h 1/1 Running 0 71s
kube-system kube-scheduler-k8s-master 1/1 Running 0 84s
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane 105s v1.24.0
[root@k8s-master ~]#
Join the worker nodes (run only on the nodes)
Use the join command printed by the successful kubeadm init above:
[root@k8s-node1 ~]# kubeadm join k8s-master:6443 --token yzicfs.d50rrfxpd3a0wokb \
--discovery-token-ca-cert-hash sha256:8548affb49155f0ba53a0ac4eb9500a060ee3b4076ad8b66bd973bab92d78103
[preflight] Running pre-flight checks
...... output omitted ......
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-node1 ~]#
The token is valid for 24 hours. To generate a new join command on the master:
[root@k8s-master ~]# kubeadm token create --print-join-command
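Existing tokens and their expiry times can be inspected with:
kubeadm token list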
Now check the cluster state from the master:
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-57d95cb479-5zppz 1/1 Running 0 2m35s
kube-system calico-node-2m8xb 1/1 Running 0 37s
kube-system calico-node-jnll4 1/1 Running 0 35s
kube-system calico-node-v6zcv 1/1 Running 0 2m35s
kube-system coredns-7f74c56694-snzmv 1/1 Running 0 3m32s
kube-system coredns-7f74c56694-whh84 1/1 Running 0 3m32s
kube-system etcd-k8s-master 1/1 Running 0 3m45s
kube-system kube-apiserver-k8s-master 1/1 Running 0 3m45s
kube-system kube-controller-manager-k8s-master 1/1 Running 0 3m44s
kube-system kube-proxy-9gc7d 1/1 Running 0 35s
kube-system kube-proxy-f9w7h 1/1 Running 0 3m32s
kube-system kube-proxy-s8rwk 1/1 Running 0 37s
kube-system kube-scheduler-k8s-master 1/1 Running 0 3m45s
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane 4m11s v1.24.0
k8s-node1 Ready <none> 61s v1.24.0
k8s-node2 Ready <none> 59s v1.24.0
[root@k8s-master ~]#
Enabling kubectl on worker nodes
On the master, copy $HOME/.kube to the node's $HOME directory:
[root@k8s-master ~]# scp -r $HOME/.kube k8s-node1:$HOME
[root@k8s-master ~]#
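kubectl on the node should now talk to the cluster:
# Run on k8s-node1; the output should match what the master reports
kubectl get nodes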
Deploying a Visualization Tool
Kuboard is a free graphical management tool for Kubernetes that aims to help users quickly bring microservices to Kubernetes.