Preface
This dual-Master load-balanced Kubernetes cluster is built on top of the binary-based single-Master Kubernetes cluster deployment.
Environment Preparation
Role | Hostname | OS | IP Address | Services |
---|---|---|---|---|
K8S cluster Master01 | master01 | CentOS 7 | 192.168.121.88 | kube-apiserver kube-controller-manager kube-scheduler etcd |
K8S cluster Master02 | master02 | CentOS 7 | 192.168.121.77 | kube-apiserver kube-controller-manager kube-scheduler etcd |
K8S cluster Node01 | node01 | CentOS 7 | 192.168.121.55 | kubelet kube-proxy docker flannel |
K8S cluster Node02 | node02 | CentOS 7 | 192.168.121.66 | kubelet kube-proxy docker flannel |
Etcd cluster node 01 | —— | CentOS 7 | 192.168.121.55 | etcd |
Etcd cluster node 02 | —— | CentOS 7 | 192.168.121.66 | etcd |
Etcd cluster node 03 | —— | CentOS 7 | 192.168.121.88 | etcd |
Load balancer Nginx+Keepalived (Master) | lb01 | CentOS 7 | 192.168.121.11 | nginx, keepalived |
Load balancer Nginx+Keepalived (Backup) | lb02 | CentOS 7 | 192.168.121.22 | nginx, keepalived |
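If the hostnames in the table are not resolvable between nodes, adding them to /etc/hosts keeps the later commands readable; a minimal sketch, run on every node (these entries are an assumption, not a step from the original setup):
cat >> /etc/hosts << EOF
192.168.121.88 master01
192.168.121.77 master02
192.168.121.55 node01
192.168.121.66 node02
192.168.121.11 lb01
192.168.121.22 lb02
EOF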
#On the Master02 node (192.168.121.77)
hostnamectl set-hostname master02
#On the lb01 node (192.168.121.11)
hostnamectl set-hostname lb01
#On the lb02 node (192.168.121.22)
hostnamectl set-hostname lb02
#Stop the firewall and put SELinux into permissive mode
systemctl stop firewalld
setenforce 0
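Note that `systemctl stop firewalld` and `setenforce 0` only last until the next reboot; a common companion step (not shown in the original) makes both changes persistent:
systemctl disable firewalld
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config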
I. Master02 Node Deployment
1. Copy certificate files and service unit files from the Master01 node
On the master01 node:
//Copy the certificate files, each master component's configuration files, and the systemd service unit files from master01 to master02
scp -r /opt/etcd/ root@192.168.121.77:/opt/
scp -r /opt/kubernetes/ root@192.168.121.77:/opt
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.121.77:/usr/lib/systemd/system/
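An optional sanity check on master02 that everything arrived, assuming the directory layout carried over from the single-Master setup:
ls /opt/etcd /opt/kubernetes/{bin,cfg,ssl}
ls /usr/lib/systemd/system/kube-*.service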
2. Modify the apiserver configuration file
On the master02 node:
//Change the IPs in the kube-apiserver configuration file
vim /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.121.88:2379,https://192.168.121.55:2379,https://192.168.121.66:2379 \
--bind-address=192.168.121.77 \	#change to master02's IP
--secure-port=6443 \
--advertise-address=192.168.121.77 \	#change to master02's IP
......
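Because the file copied from master01 still carries master01's IP (192.168.121.88) in these two flags, the same edit can be done non-interactively. A sketch, assuming exactly the values shown above; avoid a blanket replace of the IP, since 192.168.121.88 also legitimately appears in --etcd-servers:
sed -i 's#--bind-address=192.168.121.88#--bind-address=192.168.121.77#; s#--advertise-address=192.168.121.88#--advertise-address=192.168.121.77#' /opt/kubernetes/cfg/kube-apiserver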
3. Start the services
On the master02 node:
cd /opt/kubernetes/bin
ln -s /opt/kubernetes/bin/* /usr/local/bin	#put the kubernetes binaries on the PATH
//Start each service on master02 and enable it at boot
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service
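To verify that all three components came up (an optional check, not in the original), confirm the units are active and that the apiserver is listening on its secure port 6443:
systemctl is-active kube-apiserver kube-controller-manager kube-scheduler
ss -lntp | grep 6443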
4. Check the Node status
On the master02 node:
//Check the node status
kubectl get nodes
kubectl get nodes -o wide	#-o wide: print extra columns; for a Pod, this includes the name of the Node it runs on
//At this point the node status seen on master02 is only the information read from etcd; the nodes have not actually established communication with master02 yet. A VIP is therefore needed to associate the nodes with both master nodes.
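You can confirm on a node which apiserver the kubelet and kube-proxy currently point at; a sketch assuming the kubeconfig paths from the single-Master setup (the exact filenames may differ in your environment):
#On node01 (192.168.121.55)
grep server: /opt/kubernetes/cfg/*.kubeconfig
#Expect https://192.168.121.88:6443 (master01); the VIP configured next replaces this address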