1. Introduction to keepalived
1.1 What is keepalived?
Keepalived was originally designed for the LVS load-balancing software, to manage and monitor the state of the individual service nodes in an LVS cluster; VRRP functionality for high availability was added later. Keepalived can therefore not only manage LVS, but also serve as a high-availability solution for other services (for example nginx, HAProxy, MySQL and so on).
Keepalived implements high availability mainly through the VRRP protocol. VRRP stands for Virtual Router Redundancy Protocol; it was created to solve the single point of failure inherent in static routing, and it keeps the whole network running without interruption when individual nodes go down. So on the one hand keepalived can configure and manage LVS and health-check the nodes behind it, and on the other it provides high availability for system network services.
Official site: Keepalived for Linux
1.2 Key functions of keepalived
keepalived provides three important functions:
- Managing the LVS load-balancing software
- Health-checking the nodes of an LVS cluster
- Providing high availability (failover) for system network services
1.3 How keepalived failover works
Failover between keepalived high-availability peers is implemented with VRRP (Virtual Router Redundancy Protocol).
While keepalived is working normally, the master node keeps sending heartbeat messages (by multicast) to the backup node, telling the backup that the master is still alive. When the master fails, it can no longer send heartbeats; the backup stops detecting them, so it invokes its takeover routine and claims the master's IP resources and services. When the master recovers, the backup releases the IP resources and services it took over during the failure and returns to its original backup role.
So what is VRRP?
VRRP, short for Virtual Router Redundancy Protocol, was created to eliminate the single point of failure of static routing; it hands the routing task to one of the VRRP routers through an election mechanism.
1.4 How keepalived works
1.4.1 keepalived high-availability architecture diagram
1.4.2 Description of how keepalived works
keepalived high-availability peers communicate over VRRP, so let's start with VRRP:
- VRRP, short for Virtual Router Redundancy Protocol, was created to eliminate the single point of failure of static routing.
- VRRP hands the routing task to one of the VRRP routers through an election mechanism.
- VRRP uses IP multicast (default multicast address 224.0.0.18) for communication between the peers of a high-availability pair.
- In operation, the master sends packets and the backup receives them; when the backup stops receiving the master's packets, it starts its takeover routine and claims the master's resources. There can be several backups, ranked by priority in the election, but in everyday keepalived operations a single master/backup pair is the norm.
- VRRP supports authenticating its packets, but the keepalived project still recommends configuring the authentication type and password in plain text.
With VRRP covered, here is how the keepalived service itself works:
keepalived peers communicate over VRRP, and VRRP decides master and backup by election: the master has the higher priority, so while it is up it holds all the resources and the backup sits waiting. When the master dies, the backup takes over its resources and serves in its place.
Between keepalived instances, only the one acting as master keeps sending VRRP advertisements to tell the backup it is alive, and the backup does not preempt it. Once the master becomes unavailable, i.e. the backup no longer hears its advertisements, the backup starts the relevant services and takes over the resources so the business keeps running. At best the takeover completes in under one second.
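The detection logic described above can be sketched in bash. This is a simplified illustration, not keepalived's actual implementation; ADVERT_INT mirrors the advert_int setting, and MISS_LIMIT is an assumed miss threshold:

```shell
#!/bin/bash
# Simplified sketch of how a backup decides the master is dead:
# if no advertisement has been heard for MISS_LIMIT intervals, take over.
ADVERT_INT=1    # seconds between advertisements (advert_int in keepalived.conf)
MISS_LIMIT=3    # assumed number of missed adverts before declaring failure

# decide_role LAST_ADVERT_EPOCH NOW_EPOCH -> prints BACKUP or TAKEOVER
decide_role() {
    local last=$1 now=$2
    if [ $(( now - last )) -ge $(( ADVERT_INT * MISS_LIMIT )) ]; then
        echo "TAKEOVER"   # silence too long: claim the VIP and services
    else
        echo "BACKUP"     # master still advertising: stay passive
    fi
}

decide_role 100 101   # 1s of silence -> BACKUP
decide_role 100 105   # 5s of silence -> TAKEOVER
```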
3. Split-brain
In a high-availability (HA) system, when the heartbeat link between the two nodes breaks, the system that used to act as one coordinated whole splits into two independent individuals. Having lost contact, each assumes the other has failed, and the HA software on the two nodes behaves like a "split-brain" patient: both grab the shared resources and both try to start the application services, with serious consequences. Either the shared resources get carved up and the service comes up on neither side, or the service comes up on both sides at once and both write the shared storage simultaneously, corrupting data (a classic example is a database's online redo logs getting damaged).
The generally agreed countermeasures against HA split-brain are roughly the following:
- Add redundant heartbeat links, e.g. two separate lines (make the heartbeat itself HA), to reduce the probability of split-brain as much as possible;
- Use disk locking. The side currently serving locks the shared disk so that the other side cannot "steal" it when split-brain occurs. But disk locks have a real problem of their own: if the side holding the disk does not actively unlock it, the other side never gets it. In practice, if the serving node suddenly dies or crashes, it cannot run the unlock command, and the standby node cannot take over the shared resources and services. So people designed a "smart" lock for HA: the serving side only engages the disk lock when it finds all heartbeat links down (it cannot see its peer at all), and leaves the disk unlocked otherwise.
- Set up an arbitration mechanism, e.g. a reference IP (such as the gateway's). When the heartbeat link is completely down, both nodes ping the reference IP; failure to reach it means the break is on the local side. If not only the heartbeat but also the local link used to serve clients is dead, there is no point starting (or continuing) the application service, so that node voluntarily drops out of the competition and lets the side that can ping the reference IP run the service. To be extra safe, the side that cannot ping the reference IP can simply reboot itself, fully releasing any shared resources it may still hold.
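The reference-IP arbitration above can be sketched as a small bash helper. The gateway address is a placeholder, and `can_reach` and `arbitrate` are hypothetical names:

```shell
#!/bin/bash
GATEWAY=192.168.79.2   # placeholder reference IP (typically the gateway)

# True if the reference IP answers a single ping within one second.
can_reach() { ping -c1 -W1 "$1" > /dev/null 2>&1; }

# arbitrate yes|no -> the action this node should take during a partition
arbitrate() {
    if [ "$1" = "yes" ]; then
        echo "keep-serving"   # our uplink works: stay in the race for the VIP
    else
        echo "stand-down"     # our own link is dead: yield (or even reboot)
    fi
}

if can_reach "$GATEWAY"; then arbitrate yes; else arbitrate no; fi
```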
3.1 Causes of split-brain
Generally speaking, split-brain occurs for the following reasons:
- The heartbeat link between the HA server pair fails, so the two cannot communicate normally, because:
- the heartbeat cable itself is bad (broken or aged)
- a NIC or its driver is broken, or there are IP configuration conflicts (direct NIC-to-NIC link)
- equipment along the heartbeat path fails (NICs, switches)
- the arbitration machine fails (when an arbitration scheme is used)
- An iptables firewall enabled on the HA servers blocks the heartbeat traffic
- The heartbeat NIC address or related settings on the HA servers are misconfigured, so heartbeats fail to send
- Other misconfigurations, e.g. mismatched heartbeat methods, heartbeat broadcast conflicts, software bugs, and so on
Note:
In a keepalived configuration, if the virtual_router_id of the same VRRP instance is configured differently on the two ends, that too can cause split-brain.
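A quick way to guard against the mismatch described in this note is to compare the value from both nodes' configs. A sketch (the function names are made up; the demo writes two throwaway fragments under /tmp):

```shell
#!/bin/bash
# Extract the first virtual_router_id value from a keepalived config file.
get_vrid() { awk '/virtual_router_id/ {print $2; exit}' "$1"; }

# Compare the IDs from two config files and warn on a mismatch.
check_vrid() {
    if [ "$(get_vrid "$1")" = "$(get_vrid "$2")" ]; then
        echo "virtual_router_id matches"
    else
        echo "virtual_router_id MISMATCH - split-brain risk"
    fi
}

# Demo with two minimal fragments
printf 'vrrp_instance VI_1 {\n    virtual_router_id 51\n}\n' > /tmp/master.conf
printf 'vrrp_instance VI_1 {\n    virtual_router_id 52\n}\n' > /tmp/backup.conf
check_vrid /tmp/master.conf /tmp/backup.conf   # reports a mismatch
```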
3.2 Common solutions to split-brain
In a real production environment, split-brain can be prevented from the following angles:
- Use a serial cable and an Ethernet cable at the same time, i.e. two heartbeat lines, so that if one line breaks the other can still carry heartbeat messages
- When split-brain is detected, forcibly shut down one heartbeat node (this needs special hardware support, e.g. STONITH or fencing devices). In effect, when the backup stops receiving heartbeats, it sends a shutdown command over a separate line to cut the master's power
- Set up good split-brain monitoring and alerting (mail, SMS, on-call duty, etc.) so that a human can step in and arbitrate as soon as the problem occurs, reducing the damage. For example, Baidu's alerting SMS distinguishes uplink and downlink: the alert goes to the administrator's phone, and the administrator can reply with a digit or a short string that is sent back to the server, which then handles the fault automatically according to the instruction, shortening the time to resolution.
Of course, when implementing an HA solution, decide from the actual business requirements whether such losses can be tolerated. For an ordinary website's routine business, they usually can.
3.3 Monitoring for split-brain
Split-brain monitoring should run on the backup server, implemented as a custom zabbix check.
What do we monitor? Whether the VIP address appears on the backup.
A VIP showing up on the backup means one of two things:
- split-brain has occurred
- a normal master-to-backup switchover happened
So the monitor only detects the possibility of split-brain; it cannot guarantee split-brain actually happened, because a normal switchover also moves the VIP to the backup.
The monitoring script is as follows (the interface ens33 and VIP 172.16.12.250 belong to the author's test environment; adjust them to your own):
[root@slave ~]# mkdir -p /scripts && cd /scripts
[root@slave scripts]# vim check_keepalived.sh
#!/bin/bash
# Alert when the VIP is bound on this (backup) node
if [ `ip a show ens33 | grep 172.16.12.250 | wc -l` -ne 0 ]; then
    echo "keepalived is error!"
else
    echo "keepalived is OK !"
fi
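To feed the script above into zabbix, a custom item can be declared in the backup's agent configuration; the key name below is an arbitrary example, and the trigger side would then alert whenever the item returns "keepalived is error!":

```
# /etc/zabbix/zabbix_agentd.conf (sketch; key name is an assumption)
UserParameter=check_keepalived,/bin/bash /scripts/check_keepalived.sh
```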
4. Using keepalived for high availability of an nginx load balancer
Environment:
OS | Hostname | IP
---|---|---
centos8 | master | 192.168.79.140
centos8 | slave | 192.168.79.141
The virtual IP (VIP) for this setup is tentatively set to 192.168.79.250.
4.1 Installing keepalived
Configure the master keepalived
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master ~]# getenforce
Disabled
[root@master ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2495 100 2495 0 0 20966 0 --:--:-- --:--:-- --:--:-- 20966
[root@master ~]# yum install -y https://mirrors.aliyun.com/epel/epel-release-latest-8.noarch.rpm
Failed to set locale, defaulting to C.UTF-8
Repository extras is listed more than once in the configuration
CentOS-8.5.2111 - Base - mirrors.aliyun.com 1.6 MB/s | 4.6 MB 00:02
CentOS-8.5.2111 - Extras - mirrors.aliyun.com 206 kB/s | 10 kB 00:00
CentOS-8.5.2111 - AppStream - mirrors.aliyun.com 1.1 MB/s | 8.4 MB 00:07
epel-release-latest-8.noarch.rpm 301 kB/s | 24 kB 00:00
Dependencies resolved.
==================================================================================
Package Architecture Version Repository Size
==================================================================================
Installing:
epel-release noarch 8-17.el8 @commandline 24 k
Transaction Summary
==================================================================================
Install 1 Package
Total size: 24 k
Installed size: 34 k
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : epel-release-8-17.el8.noarch 1/1
Running scriptlet: epel-release-8-17.el8.noarch 1/1
Many EPEL packages require the CodeReady Builder (CRB) repository.
It is recommended that you run /usr/bin/crb enable to enable the CRB repository.
Verifying : epel-release-8-17.el8.noarch 1/1
Installed products updated.
Installed:
epel-release-8-17.el8.noarch
Complete!
[root@master ~]# sed -i 's|^#baseurl=https://download.example/pub|baseurl=https://mirrors.aliyun.com|' /etc/yum.repos.d/epel*
[root@master ~]# sed -i 's|^metalink|#metalink|' /etc/yum.repos.d/epel*
[root@master ~]# yum -y install keepalived vim wget gcc gcc-c++
Install keepalived on the backup server in the same way
[root@slave ~]# systemctl stop firewalld
[root@slave ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@slave ~]# getenforce
Disabled
[root@slave ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2495 100 2495 0 0 20966 0 --:--:-- --:--:-- --:--:-- 20966
[root@slave ~]# yum install -y https://mirrors.aliyun.com/epel/epel-release-latest-8.noarch.rpm
Failed to set locale, defaulting to C.UTF-8
Repository extras is listed more than once in the configuration
CentOS-8.5.2111 - Base - mirrors.aliyun.com 1.6 MB/s | 4.6 MB 00:02
CentOS-8.5.2111 - Extras - mirrors.aliyun.com 206 kB/s | 10 kB 00:00
CentOS-8.5.2111 - AppStream - mirrors.aliyun.com 1.1 MB/s | 8.4 MB 00:07
epel-release-latest-8.noarch.rpm 301 kB/s | 24 kB 00:00
Dependencies resolved.
==================================================================================
Package Architecture Version Repository Size
==================================================================================
Installing:
epel-release noarch 8-17.el8 @commandline 24 k
Transaction Summary
==================================================================================
Install 1 Package
Total size: 24 k
Installed size: 34 k
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : epel-release-8-17.el8.noarch 1/1
Running scriptlet: epel-release-8-17.el8.noarch 1/1
Many EPEL packages require the CodeReady Builder (CRB) repository.
It is recommended that you run /usr/bin/crb enable to enable the CRB repository.
Verifying : epel-release-8-17.el8.noarch 1/1
Installed products updated.
Installed:
epel-release-8-17.el8.noarch
Complete!
[root@slave ~]# sed -i 's|^#baseurl=https://download.example/pub|baseurl=https://mirrors.aliyun.com|' /etc/yum.repos.d/epel*
[root@slave ~]# sed -i 's|^metalink|#metalink|' /etc/yum.repos.d/epel*
[root@slave ~]# yum -y install keepalived vim wget gcc gcc-c++
4.2 Install nginx on both the master and the backup
Install nginx on the master
Install nginx on the slave
Then access each host in a browser to make sure the local nginx service responds normally
4.3 Configuring keepalived
4.3.1 Configure the master keepalived
[root@master ~]# cd /etc/keepalived/
[root@master keepalived]# vim keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id mr01
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.79.250
    }
}

virtual_server 192.168.79.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.79.140 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.79.141 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master keepalived]# systemctl restart keepalived.service
[root@master keepalived]# systemctl enable keepalived.service
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
4.3.2 Configure the backup keepalived
[root@slave ~]# cd /etc/keepalived/
[root@slave keepalived]# cp keepalived.conf keepalived.conf.bak
[root@slave keepalived]# vim keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id mr02
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.79.250
    }
}

virtual_server 192.168.79.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.79.140 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.79.141 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@slave keepalived]# systemctl restart keepalived.service
[root@slave keepalived]# systemctl enable keepalived.service
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
4.3.3 Check where the VIP is
On the master:
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:f0:df:ce brd ff:ff:ff:ff:ff:ff
inet 192.168.79.140/24 brd 192.168.79.255 scope global dynamic noprefixroute ens33
valid_lft 1402sec preferred_lft 1402sec
inet 192.168.79.250/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::3cf0:13fe:3b43:3ccd/64 scope link noprefixroute
valid_lft forever preferred_lft forever
On the slave:
[root@slave ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:b9:bd:f9 brd ff:ff:ff:ff:ff:ff
inet 192.168.79.141/24 brd 192.168.79.255 scope global dynamic noprefixroute ens33
valid_lft 1444sec preferred_lft 1444sec
inet6 fe80::9a33:ce06:9305:f665/64 scope link noprefixroute
valid_lft forever preferred_lft forever
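Checking which node currently holds the VIP can also be scripted instead of eyeballing `ip a`. A small sketch (the helper name is made up; the interface and VIP are the ones used in this setup):

```shell
#!/bin/bash
# has_vip IFACE VIP -> reports whether VIP is currently bound to IFACE
has_vip() {
    if ip addr show "$1" 2>/dev/null | grep -q " $2/"; then
        echo "VIP present"
    else
        echo "VIP absent"
    fi
}

has_vip ens33 192.168.79.250
```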
4.4 Adjust kernel parameters to allow listening on the VIP
This step is optional; it is only needed when a service must listen on the VIP address itself.
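For reference, the kernel parameter usually involved here is net.ipv4.ip_nonlocal_bind (the same one set on the haproxy nodes in section 5.4); it lets a service bind to the VIP even while the VIP is held by the other node:

```
# /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1
```

Apply it with `sysctl -p`.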
4.5 Have keepalived monitor the nginx load balancer
keepalived tracks the state of the nginx load balancer through a check script and fails over when nginx dies.
Write the scripts on the master
[root@master ~]# mkdir /scripts
[root@master ~]# cd /scripts/
[root@master scripts]# vim check.sh
#!/bin/bash
# If no nginx process is left, stop keepalived so the VIP moves to the backup
nginx_status=`ps -ef | grep -v "grep" | grep "nginx" | wc -l`
if [ $nginx_status -lt 1 ];then
    systemctl stop keepalived
fi
[root@master scripts]# chmod +x check.sh
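The ps-based check above only proves an nginx process exists. A hedged alternative is to probe the port itself with bash's built-in /dev/tcp (no external tools required); `check_nginx` is a made-up name:

```shell
#!/bin/bash
# Report "up" if something accepts TCP connections on 127.0.0.1:80.
check_nginx() {
    if (exec 3<>/dev/tcp/127.0.0.1/80) 2>/dev/null; then
        echo "up"
    else
        echo "down"
    fi
}
check_nginx
```

A script built around this would stop keepalived (as check.sh does) when it prints down.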
[root@master scripts]# vim notify.sh
#!/bin/bash
VIP=$2

# Mail an alert saying this host has just become master (address is the author's)
sendmail () {
    subject="${VIP}'s server keepalived state has changed"
    content="`date +'%F %T'`: `hostname`'s state change to master"
    echo $content | mail -s "$subject" 3215547886@qq.com
}

case "$1" in
master)
    # On promotion: make sure nginx is running, then send the alert mail
    nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep '\bnginx\b' | wc -l)
    if [ $nginx_status -lt 1 ];then
        systemctl start nginx
    fi
    sendmail
    ;;
backup)
    # On demotion: stop nginx so only the master serves
    nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep '\bnginx\b' | wc -l)
    if [ $nginx_status -gt 0 ];then
        systemctl stop nginx
    fi
    ;;
*)
    echo "Usage: $0 master|backup VIP"
    ;;
esac
[root@master scripts]# chmod +x notify.sh
Put the scripts on the slave as well
[root@slave ~]# mkdir /scripts
[root@slave ~]# cd /scripts/
[root@slave scripts]# scp root@192.168.79.140:/scripts/check.sh .
The authenticity of host '192.168.79.140 (192.168.79.140)' can't be established.
ECDSA key fingerprint is SHA256:tGhqSo3u9wWuDd+55Xaw4UgDdkolb4FOhqKtT398KTA.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.79.140' (ECDSA) to the list of known hosts.
root@192.168.79.140's password:
check.sh 100% 144 111.3KB/s 00:00
[root@slave scripts]# scp root@192.168.79.140:/scripts/notify.sh .
root@192.168.79.140's password:
notify.sh 100% 605 555.5KB/s 00:00
[root@slave scripts]#
4.6 Add the monitoring scripts to the keepalived configuration
Configure the master keepalived
[root@master ~]# vi /etc/keepalived/keepalived.conf
[root@master ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id lb01
}

vrrp_script nginx_check {
    script "/scripts/check.sh"
    interval 1
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.79.250
    }
    track_script {
        nginx_check
    }
    notify_master "/scripts/notify.sh master 192.168.79.250"
}

virtual_server 192.168.79.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.79.140 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.79.141 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master ~]# systemctl restart keepalived.service
Configure the backup keepalived
The backup does not need to check nginx itself: it starts nginx when promoted to MASTER and stops it when demoted to BACKUP
[root@slave ~]# vi /etc/keepalived/keepalived.conf
[root@slave ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id lb02
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.79.250
    }
    notify_master "/scripts/notify.sh master 192.168.79.250"
    notify_backup "/scripts/notify.sh backup 192.168.79.250"
}

virtual_server 192.168.79.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.79.140 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.79.141 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@slave ~]# systemctl restart keepalived
Verify that nginx starts automatically when the backup is promoted to MASTER (trigger the switchover by stopping keepalived on the master)
[root@slave ~]# systemctl restart keepalived.service
[root@slave ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:33:c1:e3 brd ff:ff:ff:ff:ff:ff
inet 192.168.79.141/24 brd 192.168.79.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.79.250/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe33:c1e3/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@slave ~]# curl http://192.168.79.250
slave
[root@slave ~]# ss -anlt
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:80 [::]:*
LISTEN 0 128 [::]:22 [::]:*
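To watch the failover from a client's point of view, a small polling loop against the VIP shows the responses flip from master to slave the moment keepalived is stopped on the master. `poll_vip` is a made-up helper; run it from a third machine:

```shell
#!/bin/bash
# poll_vip URL COUNT: fetch URL COUNT times, once per second,
# printing whichever backend answered (each nginx serves its own page).
poll_vip() {
    local i
    for i in $(seq "$2"); do
        curl -s -m 1 "$1" || echo "no answer"
        sleep 1
    done
}

poll_vip http://192.168.79.250 5
```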
5. Using keepalived for high availability of haproxy
Environment:
OS | Hostname | IP
---|---|---
Centos8 | master | 192.168.79.129
Centos8 | slave | 192.168.79.134
Centos8 | RS1 | 192.168.79.139
Centos8 | RS2 | 192.168.79.145
5.1 Installing keepalived
Configure the master keepalived
//Disable the firewall and SELinux
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master ~]# setenforce 0
[root@master ~]# sed -ri 's/^(SELINUX=).*/\1disabled/g' /etc/selinux/config
//Install keepalived
//master
[root@master ~]# yum -y install keepalived
//Check the files installed by the package
[root@master ~]# rpm -ql keepalived
/etc/keepalived //configuration directory
/etc/keepalived/keepalived.conf //the main configuration file
/etc/sysconfig/keepalived
/usr/bin/genhash
/usr/lib/systemd/system/keepalived.service //the service unit file
/usr/libexec/keepalived
/usr/sbin/keepalived
.....
Install keepalived on the backup server in the same way
//Disable the firewall and SELinux
[root@slave ~]# systemctl stop firewalld
[root@slave ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@slave ~]# setenforce 0
[root@slave ~]# sed -ri 's/^(SELINUX=).*/\1disabled/g' /etc/selinux/config
//Install keepalived
[root@slave ~]# yum -y install keepalived
5.2 Configuring keepalived
Configure the master keepalived
[root@master ~]# cd /etc/keepalived/
[root@master keepalived]# ls
keepalived.conf
[root@master keepalived]# mv keepalived.conf keepalived.conf.bak
[root@master keepalived]# vi keepalived.conf
[root@master keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id lb01
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.79.250
    }
}

virtual_server 192.168.79.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.79.129 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.79.134 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master keepalived]# systemctl start keepalived
[root@master keepalived]# systemctl enable keepalived
5.3 Configure the backup keepalived
[root@master keepalived]# scp keepalived.conf root@192.168.79.134:/etc/keepalived/
[root@slave ~]# cd /etc/keepalived/
[root@slave keepalived]# ls
keepalived.conf
[root@slave keepalived]# vi keepalived.conf
[root@slave keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id lb02
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.79.250
    }
}

virtual_server 192.168.79.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.79.129 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.79.134 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@slave keepalived]# systemctl enable --now keepalived.service
Check where the VIP is
On the MASTER:
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:b8:32:24 brd ff:ff:ff:ff:ff:ff
inet 192.168.79.129/24 brd 192.168.79.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.79.250/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:feb8:3224/64 scope link noprefixroute
valid_lft forever preferred_lft forever
On the SLAVE:
[root@slave keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:70:9e:3b brd ff:ff:ff:ff:ff:ff
inet 192.168.79.134/24 brd 192.168.79.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe70:9e3b/64 scope link noprefixroute
valid_lft forever preferred_lft forever
5.4 Set up haproxy on both the master and the backup as an http load balancer
Disable the firewall and SELinux on RS1 and RS2 as well
//Deploy httpd on RS1 and RS2
[root@RS1 ~]# yum -y install httpd
[root@RS1 ~]# echo RS1 > /var/www/html/index.html
[root@RS1 ~]# systemctl restart httpd
[root@RS1 ~]# systemctl enable httpd
[root@RS2 ~]# yum -y install httpd
[root@RS2 ~]# echo RS2 > /var/www/html/index.html
[root@RS2 ~]# systemctl restart httpd
[root@RS2 ~]# systemctl enable httpd
//Adjust kernel parameters on the master
[root@master ~]# vim /etc/sysctl.conf
[root@master ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
//Edit the haproxy configuration file; configure whatever your own needs require
[root@master ~]# vi /etc/haproxy/haproxy.cfg
[root@master ~]# cat /etc/haproxy/haproxy.cfg
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

backend servers
    server web01 192.168.79.139:80
    server web02 192.168.79.145:80
//Start the haproxy service
[root@master ~]# systemctl restart haproxy.service
//Verify
[root@master ~]# curl http://192.168.79.129
RS1
[root@master ~]# curl http://192.168.79.129
RS2
[root@master ~]# curl http://192.168.79.129
RS1
[root@master ~]# curl http://192.168.79.129
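One thing worth noting about the haproxy.cfg above: the backend server lines carry no health checks, so haproxy would keep dispatching requests to a dead RS. Adding haproxy's `check` keyword (and an explicit balance algorithm) fixes that; a sketch of the amended backend:

```
backend servers
    balance roundrobin
    server web01 192.168.79.139:80 check
    server web02 192.168.79.145:80 check
```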
On the backup server
[root@slave ~]# vi /etc/haproxy/haproxy.cfg
[root@slave ~]# cat /etc/haproxy/haproxy.cfg
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

backend servers
    server web01 192.168.79.139:80
    server web02 192.168.79.145:80
//Start the haproxy service
[root@slave ~]# systemctl restart haproxy.service
//Verify
[root@slave ~]# curl http://192.168.79.134
RS1
[root@slave ~]# curl http://192.168.79.134
RS2
//Test access through the VIP from the master
[root@master ~]# curl http://192.168.79.250
RS2
[root@master ~]# curl http://192.168.79.250
RS1
//When the backup server is promoted to MASTER
[root@master ~]# systemctl stop keepalived.service
[root@slave haproxy-2.6.0]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:70:9e:3b brd ff:ff:ff:ff:ff:ff
inet 192.168.79.134/24 brd 192.168.79.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.79.250/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe70:9e3b/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@slave haproxy-2.6.0]# curl http://192.168.79.250
RS2
[root@slave haproxy-2.6.0]# curl http://192.168.79.250
RS1