Deploy haproxy+keepalived
This deployment builds a three-node highly available haproxy+keepalived cluster on 192.168.21.224, 192.168.21.225, and 192.168.21.226. The VIP is 192.168.17.21.
Install haproxy+keepalived
yum install -y haproxy keepalived
Note: all three haproxy+keepalived nodes need the packages installed.
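To confirm both packages are present on each node, a quick check:
rpm -q haproxy keepalived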
Configure keepalived
Node 1: 192.168.21.224
cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id 192.168.21.224
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
}
vrrp_instance VI-kube-master {
    state MASTER
    priority 120
    dont_track_primary
    interface eth0
    virtual_router_id 68
    advert_int 3
    track_script {
        check_haproxy
    }
    virtual_ipaddress {
        192.168.17.21
    }
}
EOF
Node 2: 192.168.21.225
cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id 192.168.21.225
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
}
vrrp_instance VI-kube-master {
    state BACKUP
    priority 110
    dont_track_primary
    interface eth0
    virtual_router_id 68
    advert_int 3
    track_script {
        check_haproxy
    }
    virtual_ipaddress {
        192.168.17.21
    }
}
EOF
Node 3: 192.168.21.226
cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id 192.168.21.226
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
}
vrrp_instance VI-kube-master {
    state BACKUP
    priority 100
    dont_track_primary
    interface eth0
    virtual_router_id 68
    advert_int 3
    track_script {
        check_haproxy
    }
    virtual_ipaddress {
        192.168.17.21
    }
}
EOF
Notes:
- The interface holding the VIP (interface ${VIP_IF}) is eth0;
- The check_haproxy.sh script (below) checks whether haproxy on the node is healthy; if not, keepalived is stopped, releasing the VIP and triggering a new master election (a way to watch this VRRP traffic follows this list);
- router_id and virtual_router_id identify the keepalived instances belonging to this HA setup; if there are multiple keepalived HA setups, each must use distinct values;
- priority on the backup nodes must be lower than on the master (here 110 and 100 versus 120);
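To verify the election, the VRRP advertisements on the VIP interface can be observed directly; a minimal check, assuming tcpdump is available:
tcpdump -i eth0 -nn 'ip proto 112'   # VRRP is IP protocol 112
The current master should send an advertisement every 3 seconds (advert_int), showing vrid 68 and its priority.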
Health-check script vi /etc/keepalived/check_haproxy.sh
#!/bin/bash
# If haproxy on this node is down, stop keepalived so the VIP fails over.
flag=$(systemctl status haproxy &> /dev/null; echo $?)
if [[ $flag != 0 ]]; then
    echo "haproxy is down, stopping keepalived"
    systemctl stop keepalived
fi
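keepalived invokes the script as an external command, so it must be executable; it can also be run by hand to verify the behavior:
chmod +x /etc/keepalived/check_haproxy.sh
/etc/keepalived/check_haproxy.sh   # with haproxy stopped, this should stop keepalived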
Edit the following section of the keepalived unit file (vi /usr/lib/systemd/system/keepalived.service) so that keepalived is ordered after, and requires, haproxy:
[Unit]
Description=LVS and VRRP High Availability Monitor
After=syslog.target network-online.target haproxy.service
Requires=haproxy.service
- The keepalived configuration is nearly identical on all three hosts; only state differs (MASTER on the master node, BACKUP on the backups) and priority (highest on the master, decreasing on each backup);
- The custom check script watches the local haproxy service and, if it is unhealthy, stops the local keepalived to release the VIP;
- Split-brain is not handled here; a related check can be added to the script later (see the sketch after this list).
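As a starting point for such a split-brain check, one common pattern is to also require gateway reachability before holding on to the VIP; a rough sketch, where the gateway address is only a placeholder for this environment:
#!/bin/bash
# Hypothetical extension of check_haproxy.sh: release the VIP if haproxy is
# down OR the gateway is unreachable (a hint that the node is network-isolated).
GATEWAY=192.168.16.1   # placeholder: substitute the real gateway address
if ! systemctl status haproxy &> /dev/null || ! ping -c 2 -W 1 $GATEWAY &> /dev/null; then
    echo "haproxy down or gateway unreachable, stopping keepalived"
    systemctl stop keepalived
fi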
Configure haproxy
The configuration is the same on all three nodes.
cat > /etc/haproxy/haproxy.cfg <<EOF
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /var/run/haproxy-admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1

defaults
    log global
    timeout connect 5000
    timeout client 10m
    timeout server 10m

listen admin_stats
    bind 0.0.0.0:10080
    mode http
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /status
    stats realm welcome\ login\ Haproxy
    stats auth admin:123456
    stats hide-version
    stats admin if TRUE

listen kube-master
    bind 0.0.0.0:8443
    mode tcp
    option tcplog
    balance source
    server 192.168.21.224 192.168.21.224:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.21.225 192.168.21.225:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.21.226 192.168.21.226:6443 check inter 2000 fall 2 rise 2 weight 1
EOF
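Before starting the service, the file can be syntax-checked with haproxy's check mode:
haproxy -c -f /etc/haproxy/haproxy.cfg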
- haproxy exposes status information on port 10080;
- haproxy listens on port 8443 on all interfaces; this port must match the one specified by the environment variable ${KUBE_APISERVER} (a quick connectivity probe follows this list);
- the server lines list the IPs and ports on which all kube-apiserver instances listen;
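Once the services are started in the next step, the 8443 path through the load balancer can be probed from any machine; the exact response depends on the apiserver's authentication settings (an unauthenticated request typically gets a 401/403 JSON reply, which still proves the VIP and backend path work):
curl -k https://192.168.17.21:8443/version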
Start haproxy+keepalived
Start on all three nodes:
systemctl daemon-reload
systemctl enable haproxy
systemctl enable keepalived
systemctl start haproxy
systemctl start keepalived
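To confirm both services came up cleanly, their state and the keepalived log can be checked on each node:
systemctl is-active haproxy keepalived
journalctl -u keepalived -n 20 --no-pager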
If nothing reported an error, the VIP 192.168.17.21 should now be bound to eth0 on the master node 192.168.21.224:
[root@c821v224 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:d4:b7:63 brd ff:ff:ff:ff:ff:ff
inet 192.168.21.224/20 brd 192.168.31.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.17.21/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fed4:b763/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:ee:85:b2:80 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:eeff:fe85:b280/64 scope link
valid_lft forever preferred_lft forever
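A quick failover test, assuming the cluster is otherwise idle: stop haproxy on the master, and the check script will stop keepalived there, so the VIP should move to the highest-priority backup (192.168.21.225) within a few advert_int intervals:
systemctl stop haproxy                 # on 192.168.21.224
ip a show eth0 | grep 192.168.17.21    # on 192.168.21.225, after a few seconds
systemctl start haproxy keepalived     # on 192.168.21.224 to restore; the MASTER preempts the VIP back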
View the haproxy status page
Open ${MASTER_VIP}:10080/status in a browser to view the haproxy status page.
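The page is protected by the stats auth credentials from haproxy.cfg, so it can also be fetched from the command line:
curl -u admin:123456 http://192.168.17.21:10080/status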