keepalived + LVS High-Availability Load-Balancing Cluster

This article introduces keepalived, which implements a high-availability architecture based on the VRRP protocol and works together with LVS, nginx, and similar load balancers. Its two core functions are health checking and failover. The article also explains how the VRRP protocol works, then walks through the configuration of a highly available load-balancing cluster: host deployment, setting up the two keepalived hosts, editing the configuration files, and fault-simulation tests.

I. A Brief Introduction to keepalived
1. keepalived
keepalived implements a high-availability architecture based on the VRRP protocol, which is the foundation of router redundancy. It works together with load-balancing technologies such as LVS and nginx to make a cluster highly available.
keepalived's job is to monitor the state of the director (the load-balancing scheduler). In a load-balanced cluster, every request passes through the director for scheduling and forwarding, so if the director fails the whole cluster goes down. keepalived removes this single point of failure: two or more directors are deployed, only one acting as the master while the others stand by as backups. When the master director fails, keepalived automatically promotes a backup to master, keeping the cluster both load-balanced and highly available.
Health checking and failover are keepalived's two core functions. Health checking keeps the real servers behind the load balancer (the servers that actually carry the business traffic) under surveillance using TCP three-way handshakes, ICMP requests, HTTP requests, UDP echo requests and the like. Failover applies to load balancers configured in master/backup mode: VRRP maintains a heartbeat between the master and backup load balancers, and when the master fails, the backup takes over its traffic, minimizing traffic loss and keeping the service stable.
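
As a rough illustration of what a TCP-style health check amounts to (this is only a conceptual sketch, not how keepalived is actually invoked; 172.25.4.113:80 is one of the backend servers used later), the check simply tries to open a connection to the real server's service port within a timeout:

## conceptual sketch of a TCP health check; keepalived performs the equivalent internally
if timeout 3 bash -c '</dev/tcp/172.25.4.113/80'; then
    echo "real server is alive"
else
    echo "real server is down - it would be removed from the LVS table"
fi
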
2. The VRRP protocol
VRRP stands for Virtual Router Redundancy Protocol, a fault-tolerance protocol for making routers highly available. N routers providing the same function are grouped into a router group with one master and several backups; to the outside world the group looks like a single virtual router with one virtual IP (the VIP, which also serves as the default gateway of the other machines on the LAN). The master that holds the VIP answers ARP requests and forwards IP packets, while the other routers in the group stand by as backups. The master sends multicast advertisements; if a backup receives no VRRP packet within the timeout, it assumes the master is down, and a new master is elected among the backups according to VRRP priority, which keeps routing highly available.
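
If you want to see this heartbeat on the wire, the master's advertisements are multicast to 224.0.0.18 using IP protocol 112. A hedged example (the interface name depends on your environment) is to capture them with tcpdump on either director:

tcpdump -i eth0 -nn vrrp   ## "vrrp" is tcpdump shorthand for "ip proto 112"; expect one advertisement per advert_int (1 s in the configuration below)
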
II. Configuring the Highly Available Load-Balancing Cluster
1. Lab host deployment
keepalived host 1: 172.25.4.111
keepalived host 2: 172.25.4.112
Backend server 1: 172.25.4.113
Backend server 2: 172.25.4.114
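
The shell prompts in the transcripts below refer to these machines as server1 through server4, and the virtual IP (VIP) configured later is 172.25.4.100. A minimal /etc/hosts sketch matching that naming (the hostnames are an assumption based on the prompts, not stated explicitly in the plan above):

172.25.4.111  server1   ## keepalived host 1 (master director)
172.25.4.112  server2   ## keepalived host 2 (backup director)
172.25.4.113  server3   ## backend server 1
172.25.4.114  server4   ## backend server 2
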
2. Setting up the two keepalived hosts
server1

[root@server1 ~]# cd /mnt
[root@server1 mnt]# ls
keepalived-2.0.6.tar.gz  ## source tarball: compile first, then install
[root@server1 mnt]# tar zxf keepalived-2.0.6.tar.gz  ## unpack the source tarball
[root@server1 mnt]# ls
keepalived-2.0.6  keepalived-2.0.6.tar.gz
[root@server1 mnt]# cd keepalived-2.0.6
[root@server1 keepalived-2.0.6]# ls
aclocal.m4   compile       depcomp     keepalived          missing
ar-lib       configure     doc         keepalived.spec.in  README.md
AUTHOR       configure.ac  genhash     lib                 snap
bin_install  CONTRIBUTORS  INSTALL     Makefile.am         TODO
ChangeLog    COPYING       install-sh  Makefile.in
[root@server1 keepalived-2.0.6]# yum install gcc openssl-devel -y  ## install the build dependencies needed to compile the source
[root@server1 keepalived-2.0.6]# ./configure --prefix=/usr/local/keepalived --with-init=systemd  ## configure the source tree
[root@server1 keepalived-2.0.6]# make && make install  ## build and install
[root@server1 keepalived-2.0.6]# yum install ipvsadm -y

server2

[root@server2 ~]# cd /mnt
[root@server2 mnt]# ls
keepalived-2.0.6.tar.gz
[root@server2 mnt]# tar zxf keepalived-2.0.6.tar.gz 
[root@server2 mnt]# ls
keepalived-2.0.6  keepalived-2.0.6.tar.gz
[root@server2 mnt]# cd keepalived-2.0.6
[root@server2 keepalived-2.0.6]# ls
aclocal.m4   compile       depcomp     keepalived          missing
ar-lib       configure     doc         keepalived.spec.in  README.md
AUTHOR       configure.ac  genhash     lib                 snap
bin_install  CONTRIBUTORS  INSTALL     Makefile.am         TODO
ChangeLog    COPYING       install-sh  Makefile.in
[root@server2 keepalived-2.0.6]# yum install gcc openssl-devel -y
[root@server2 keepalived-2.0.6]# ./configure --prefix=/usr/local/keepalived --with-init=systemd
[root@server2 keepalived-2.0.6]# make && make install
[root@server2 keepalived-2.0.6]# yum install ipvsadm -y  ## install the ipvsadm management tool only; no LVS rules are written by hand, keepalived generates them
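
Before going on, it is worth a quick check on both directors that the build landed under the prefix given to ./configure (a hedged sanity check):

/usr/local/keepalived/sbin/keepalived --version   ## should report the version built above (2.0.6)
ls /usr/local/keepalived/etc/keepalived/          ## the sample keepalived.conf is installed here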

3. Create symbolic links for convenient access to the keepalived files
server1

[root@server1 keepalived-2.0.6]# cd /usr/local/keepalived/
[root@server1 keepalived]# ls
bin  etc  sbin  share
[root@server1 keepalived]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server1 keepalived]# ln -s /usr/local/keepalived/etc/keepalived/ /etc
[root@server1 keepalived]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/

server2

[root@server2 keepalived-2.0.6]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server2 keepalived-2.0.6]# ln -s /usr/local/keepalived/etc/keepalived/ /etc
[root@server2 keepalived-2.0.6]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
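
A quick way to confirm the links resolve as intended:

ls -ld /etc/keepalived /etc/sysconfig/keepalived /sbin/keepalived   ## each should point back into /usr/local/keepalived
keepalived --version                                                ## now found via /sbin thanks to the symlink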

4. Editing the keepalived configuration file
server1

[root@server1 keepalived]# cd /etc/keepalived/
[root@server1 keepalived]# ls
keepalived.conf  samples
[root@server1 keepalived]# vim keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
        root@localhost  ## mailbox that receives alert mail when a node goes down
   }
   notification_email_from keepalived@localhost  ## sender address of the alert mail
   smtp_server 127.0.0.1  ## SMTP server used to send the mail
   smtp_connect_timeout 30  ## SMTP connection timeout
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
#   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER  ## this node starts as the master
    interface eth0
    virtual_router_id 14  ## must match on both nodes
    priority 100  ## VRRP priority; the higher value wins the master election
    advert_int 1  ## VRRP advertisement interval: 1 s
    authentication {
        auth_type PASS  ## authentication type
        auth_pass 1111  ## authentication password
    }
    virtual_ipaddress {
        172.25.4.100  ## virtual IP (VIP)
    }
}

virtual_server 172.25.4.100 80 {
    delay_loop 6  ## health-check interval in seconds
    lb_algo rr  ## LVS scheduling algorithm: round robin
    lb_kind DR  ## LVS forwarding method: direct routing (DR)
#    persistence_timeout 50
    protocol TCP  ## service protocol

    real_server 172.25.4.113 80 {
        weight 1  ## weight belongs at the real_server level, not inside TCP_CHECK
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }

    real_server 172.25.4.114 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
}
[root@server1 keepalived]# systemctl start keepalived
[root@server1 keepalived]# systemctl enable keepalived
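
Once keepalived is running, server1 should claim the VIP and log the transition to MASTER. A hedged way to confirm (interface and VIP as configured above):

ip addr show eth0 | grep 172.25.4.100          ## the VIP should appear on eth0 of the MASTER
journalctl -u keepalived --no-pager | tail      ## look for a line like "Entering MASTER STATE"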

server2

On server2, edit /etc/keepalived/keepalived.conf in the same way; only the state and priority differ:

! Configuration File for keepalived

global_defs {
   notification_email {
        root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
#   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP  ## this node is the backup
    interface eth0
    virtual_router_id 14
    priority 50  ## lower priority than the master
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.4.100
    }
}

virtual_server 172.25.4.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
#    persistence_timeout 50
    protocol TCP

    real_server 172.25.4.113 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }

    real_server 172.25.4.114 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
}
[root@server2 keepalived]# systemctl start keepalived
[root@server2 keepalived]# systemctl enable keepalived
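
On server2 the same checks should show the opposite picture while server1 is healthy:

ip addr show eth0 | grep 172.25.4.100          ## should print nothing on the BACKUP as long as the MASTER is up
journalctl -u keepalived --no-pager | tail      ## look for a line like "Entering BACKUP STATE"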

5. Setting up the backend servers
(1) Installing Apache (httpd)
server3

[root@server3 ~]# yum install -y httpd
[root@server3 ~]# vim /var/www/html/index.html
[root@server3 ~]# cat /var/www/html/index.html
server3
[root@server3 ~]# systemctl start httpd
[root@server3 ~]# systemctl enable httpd

server4

[root@server4 ~]# yum install httpd -y
[root@server4 ~]# vim /var/www/html/index.html
[root@server4 ~]# cat /var/www/html/index.html
server4
[root@server4 ~]# systemctl start httpd
[root@server4 ~]# systemctl enable httpd
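
Before testing through the VIP, it helps to confirm each backend answers on its own address, for example from the physical host:

curl 172.25.4.113   ## expected output: server3
curl 172.25.4.114   ## expected output: server4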

(2) Adding the virtual IP (VIP)

[root@server3 ~]# ip addr add 172.25.4.100/24 dev eth0

[root@server4 ~]# ip addr add 172.25.4.100/24 dev eth0
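
The VIP is added on the real servers only so that, in DR mode, they accept packets addressed to 172.25.4.100; ARP for it is suppressed in the next step. Note that an address added with ip addr add does not survive a reboot, so in a real deployment it would have to be made persistent (for example in the interface configuration). To verify:

ip addr show eth0 | grep 172.25.4.100   ## run on server3 and server4; the VIP should be listed as an extra address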

(3) Setting the arptables firewall rules

[root@server3 ~]# yum install arptables -y
[root@server3 ~]# arptables -A INPUT -d 172.25.4.100 -j DROP  ## do not answer ARP requests for the VIP
[root@server3 ~]# arptables -A OUTPUT -s 172.25.4.100 -j mangle --mangle-ip-s 172.25.4.113  ## in outgoing ARP packets, rewrite the VIP source address to the real server's own IP
[root@server3 ~]# arptables -L
Chain INPUT (policy ACCEPT)
-j DROP -d server3 

Chain OUTPUT (policy ACCEPT)
-j mangle -s server3 --mangle-ip-s server3 

Chain FORWARD (policy ACCEPT)

[root@server4 ~]# yum install arptables -y
[root@server4 ~]# arptables -A INPUT -d 172.25.4.100 -j DROP
[root@server4 ~]# arptables -A OUTPUT -s 172.25.4.100 -j mangle --mangle-ip-s 172.25.4.114
[root@server4 ~]# arptables -L
Chain INPUT (policy ACCEPT)
-j DROP -d server4 

Chain OUTPUT (policy ACCEPT)
-j mangle -s server4 --mangle-ip-s server4 

Chain FORWARD (policy ACCEPT)
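
These arptables rules are likewise not persistent across reboots. Depending on the distribution, the arptables package may ship an arptables-save helper together with a service that restores rules from /etc/sysconfig/arptables; a hedged sketch (verify these exist on your system first):

arptables-save > /etc/sysconfig/arptables   ## assumption: arptables-save is provided by the arptables package
systemctl enable arptables                  ## assumption: a matching arptables service restores the rules at boot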

6. Checking the generated LVS rules on both directors

[root@server1 keepalived]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  server1:http rr
[root@server1 keepalived]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  server1:http rr
  -> server3:http                 Masq    1      0          0         
  -> server4:http                 Masq    1      0          0   
        
[root@server2 keepalived]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.4.100:http rr
  -> server3:http                 Masq    1      0          0         
  -> server4:http                 Masq    1      0          0         
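
ipvsadm resolves addresses to host and service names by default; adding -n lists numeric addresses and ports instead, which makes it easier to match the output against the keepalived configuration:

ipvsadm -Ln   ## the virtual service is then listed numerically, e.g. 172.25.4.100:80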

7. Testing from the physical host

[root@foundation4 kiosk]# curl 172.25.4.100
server4
[root@foundation4 kiosk]# curl 172.25.4.100
server3
[root@foundation4 kiosk]# curl 172.25.4.100
server4
[root@foundation4 kiosk]# curl 172.25.4.100
server3
[root@foundation4 kiosk]# curl 172.25.4.100
server4
[root@foundation4 kiosk]# curl 172.25.4.100
server3

8. Simulating failures
(1) When httpd fails on one of the backend servers

[root@server3 ~]# systemctl stop httpd

Director state at this point:

[root@server1 keepalived]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  server1:http rr
  -> server4:http                 Route   1      0          0           ## the failed server is automatically removed from the LVS table
You have new mail in /var/spool/mail/root

Testing from the physical host:

[root@foundation4 kiosk]# curl 172.25.4.100
server4
[root@foundation4 kiosk]# curl 172.25.4.100
server4
[root@foundation4 kiosk]# curl 172.25.4.100
server4

(2) Recovering the failed backend server

[root@server3 ~]# systemctl start httpd

Director state at this point:

[root@server1 keepalived]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  server1:http rr
  -> server3:http                 Route   1      0          0           ## the recovered backend server is automatically added back to the LVS table
  -> server4:http                 Route   1      0          4         
You have new mail in /var/spool/mail/root

Testing from the physical host:

[root@foundation4 kiosk]# curl 172.25.4.100
server3
[root@foundation4 kiosk]# curl 172.25.4.100
server4
[root@foundation4 kiosk]# curl 172.25.4.100
server3
[root@foundation4 kiosk]# curl 172.25.4.100
server4
[root@foundation4 kiosk]# curl 172.25.4.100
server3
[root@foundation4 kiosk]# curl 172.25.4.100
server4
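
The simulations above only cover a failing backend server. The other half of the high-availability picture, failover between the two directors themselves, can be exercised in the same spirit (a hedged sketch, not part of the transcript above): stop keepalived on the master and watch the VIP and the LVS table move to the backup.

systemctl stop keepalived                 ## on server1: simulate a director failure
ip addr show eth0 | grep 172.25.4.100     ## on server2: the VIP should now be here
ipvsadm -Ln                               ## on server2: the same virtual service and real servers
curl 172.25.4.100                         ## from the physical host: responses still alternate between server3 and server4
systemctl start keepalived                ## on server1: with the higher priority and default preempt behaviour it takes the VIP back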