I. Kubernetes Node Onboarding and Offboarding
1. Bringing a New Node Online
1) Preparation
Disable the firewalld firewall and SELinux
Set the hostname
Configure /etc/hosts
Disable swap
swapoff -a
To disable it permanently, edit /etc/fstab with vi and comment out the swap line
Pass bridged IPv4 traffic to the iptables chains
modprobe br_netfilter ## load the bridge-related kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply the settings
Enable IP forwarding
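Note that modprobe only loads the module for the current boot. A minimal sketch to make br_netfilter load automatically at every boot (the file name k8s.conf is just a convention, not a requirement):
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF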
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p
Time synchronization
yum install -y chrony;
systemctl start chronyd;
systemctl enable chronyd
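Optionally, verify that chrony is actually syncing against a time source (this just lists the configured sources and their status):
chronyc sources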
2) Install containerd
First install the yum-utils tool
yum install -y yum-utils
Configure Docker's official yum repository (skip this step if it has already been done)
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
Install containerd
yum install containerd.io -y
Enable and start the service
systemctl enable containerd
systemctl start containerd
Generate the default configuration
containerd config default > /etc/containerd/config.toml
Edit the configuration
vi /etc/containerd/config.toml
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9" # change to the Aliyun mirror address
SystemdCgroup = true # use the systemd cgroup driver
Restart the containerd service
systemctl restart containerd
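If you prefer not to edit the file by hand, the same two changes can be scripted. The sed patterns below are a sketch that assumes the stock config.toml generated above; verify the resulting file before relying on it:
# point the sandbox (pause) image at the Aliyun mirror
sed -i 's#sandbox_image = .*#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
# switch the runc runtime to the systemd cgroup driver
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
systemctl restart containerd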
3) Configure the Kubernetes repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Note: this uses the RHEL 7 (el7) Kubernetes repo, which also works on EL8 systems.
4) Install kubeadm and kubelet
yum install -y kubelet-1.27.2 kubeadm-1.27.2 kubectl-1.27.2
Start the kubelet service
systemctl start kubelet.service
systemctl enable kubelet.service
5) Point crictl at containerd
crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
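This command writes /etc/crictl.yaml. For reference, an equivalent hand-written file looks roughly like this (the image-endpoint, timeout, and debug values are illustrative defaults, not requirements):
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF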
6) On the master node, get the join token
kubeadm token create --print-join-command
7) On the new node, join the cluster
kubeadm join 192.168.222.101:6443 --token uzopz7.ryng3lkdh2qwvy89 --discovery-token-ca-cert-hash sha256:1a1fca6d1ffccb4f48322d706ea43ea7b3ef2194483699952178950b52fe2601
8) On the master, check the node information
kubectl get node
2. Taking a Node Offline
1) Before taking the node offline, create a test Deployment
Create the Deployment from the command line with 7 Pod replicas
kubectl create deployment testdp2 --image=nginx:1.23.2 --replicas=7
Check the Pods
kubectl get po -o wide
2) Evict the Pods on the node being taken offline and mark it unschedulable (run on aminglinux01)
kubectl drain aminglinux04 --ignore-daemonsets
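If the drain is blocked by Pods that use emptyDir volumes, kubectl drain supports an extra flag; add it only if losing that node-local data is acceptable:
kubectl drain aminglinux04 --ignore-daemonsets --delete-emptydir-data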
3) Restore schedulability (run on aminglinux01)
kubectl uncordon aminglinux04
4) Remove the node
kubectl delete node aminglinux04
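kubectl delete node only removes the Node object from the cluster; the machine itself still carries its old kubelet and CNI state. If the node will be reused or re-joined later, a common cleanup (run on the removed node itself, shown here as an optional sketch) is:
kubeadm reset -f
kubeadm reset does not remove CNI configuration or iptables rules, so those may also need manual cleanup.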
II. Building a Highly Available Kubernetes Cluster (Stacked etcd Mode)
A stacked etcd cluster means that etcd runs on the same hosts as the other Kubernetes control-plane components.
High-availability approach:
1) Use keepalived + haproxy for high availability and load balancing (see the configuration sketch after the machine list below)
2) Deploy apiserver, controller-manager, and scheduler on each of the three master machines, three control-plane nodes in total
3) Run etcd on the same three machines as a three-node cluster
Machine preparation (OS: Rocky Linux 8.7):
Hostname | IP | Components
k8s-master01 | 192.168.100.11 | etcd, apiserver, controller-manager, scheduler, keepalived, haproxy, kubelet, containerd, kubeadm
k8s-master02 | 192.168.100.12 | etcd, apiserver, controller-manager, scheduler, keepalived, haproxy, kubelet, containerd, kubeadm
k8s-master03 | 192.168.100.13 | etcd, apiserver, controller-manager, scheduler, keepalived, haproxy, kubelet, containerd, kubeadm
k8s-node01 | 192.168.100.14 | kubelet, containerd, kubeadm
k8s-node02 | 192.168.100.15 | kubelet, containerd, kubeadm
VIP (keepalived) | 192.168.100.200 | --
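Below is a minimal configuration sketch for the keepalived + haproxy layer, assuming the VIP 192.168.100.200 and master IPs from the list above, haproxy listening on port 16443 so it does not collide with the local apiserver on 6443, and a NIC named ens160 (the interface name, priorities, and password are assumptions; adjust them to your environment):
# /etc/keepalived/keepalived.conf on k8s-master01
# (on k8s-master02/03 use state BACKUP and lower priorities, e.g. 90 and 80)
vrrp_instance VI_1 {
    state MASTER
    interface ens160            # assumed NIC name
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s_vip_pass  # pick your own secret
    }
    virtual_ipaddress {
        192.168.100.200         # the VIP from the machine list
    }
}

# Appended to /etc/haproxy/haproxy.cfg on all three masters
# (keep the distribution's existing global/defaults sections)
frontend k8s-apiserver
    bind *:16443
    mode tcp
    option tcplog
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server k8s-master01 192.168.100.11:6443 check
    server k8s-master02 192.168.100.12:6443 check
    server k8s-master03 192.168.100.13:6443 check
With this in place, kubeadm init would typically be given --control-plane-endpoint 192.168.100.200:16443 so that kubelets and clients reach the apiservers through the VIP.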
1. Preparation
Note: perform these steps on all 5 machines
1) Disable the firewalld firewall and SELinux
[root@bogon ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@bogon ~]# systemctl disable firewalld; systemctl stop firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
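The sed above only changes the on-disk config, which takes effect after a reboot; to switch SELinux to permissive mode immediately on the running system (optional):
setenforce 0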
2) Set the hostname (use the corresponding name on each machine)
hostnamectl set-hostname k8s-master01
3) Configure /etc/hosts
echo "192.168.100.11 k8s-master01" >> /etc/hosts
echo "192.168.100.12 k8s-master01" >> /etc/hosts
echo "192.168.100.13 k8s-master01" >> /etc/hosts
echo "192.168.100.14 k8s-node01" >> /etc/hosts
echo "192.168.100.15 k8s-node02" >> /etc/hosts
4) Disable swap
swapoff -a
To disable it permanently, comment out the swap line in /etc/fstab
/dev/mapper/rl-root / xfs defaults 0 0
UUID=784fb296-c00c-4615-a4a6-583ae0156b04 /boot xfs defaults 0 0
#/dev/mapper/rl-swap none swap defaults 0 0
5) Pass bridged IPv4 traffic to the iptables chains
modprobe br_netfilter ## load the bridge-related kernel parameters
[root@bogon ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> EOF
[root@bogon ~]# sysctl --system   # apply the settings
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-coredump.conf ...
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %