DR Mode
Lab environment:
All nodes must be in the same VLAN (virtual LAN).
client: 172.25.4.250
Load balancers (directors): server1, server2
Real servers: server3, server4
Stop the active/standby (hot-standby) cluster services on server1 and server2 first,
so they do not interfere with the experiment.
[root@server2 ~]# pcs cluster disable --all
server1: Cluster Disabled
server2: Cluster Disabled
[root@server2 ~]# pcs cluster stop --all
server1: Stopping Cluster (pacemaker)...
server2: Stopping Cluster (pacemaker)...
server2: Stopping Cluster (corosync)...
server1: Stopping Cluster (corosync)...
[root@server2 ~]# pcs status
Error: cluster is not currently running on this node
[root@server2 ~]# systemctl disable --now pcsd
Removed symlink /etc/systemd/system/multi-user.target.wants/pcsd.service.
[root@server1 ~]# systemctl disable --now pcsd
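As a quick sanity check (assuming the standard unit names pcsd, corosync and pacemaker), you can confirm on both nodes that nothing cluster-related is still running:
systemctl is-active pcsd corosync pacemaker   # each unit should report "inactive" (or "unknown")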
server3 and server4 act as the real servers.
Install httpd and enable it to start on boot:
[root@server3 ~]# yum install httpd -y
[root@server4 ~]# yum install httpd -y
[root@server3 ~]# systemctl start httpd
[root@server3 ~]# systemctl enable httpd --now
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@server4 ~]# systemctl start httpd
[root@server4 ~]# systemctl enable httpd --now
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@server3 ~]# echo vm3 > /var/www/html/index.html
[root@server4 ~]# echo vm4 > /var/www/html/index.html
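Before adding the director it is worth checking from the client that both real servers answer on their own IPs (a minimal check, assuming curl is installed on the client):
[root@foundation4 ~]# curl 172.25.4.3   # expect: vm3
[root@foundation4 ~]# curl 172.25.4.4   # expect: vm4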
ipvsadm is only a user-space tool, similar to iptables: the actual forwarding is done by the LVS (IPVS) code in the Linux kernel, and ipvsadm just configures it.
[root@server2 ~]# yum install ipvsadm -y
[root@server2 ~]# ipvsadm -A -t 172.25.4.100:80 -s rr
[root@server2 ~]# ipvsadm -a -t 172.25.4.100:80 -r 172.25.4.3:80 -g
[root@server2 ~]# ipvsadm -a -t 172.25.4.100:80 -r 172.25.4.4:80 -g
## -A: add a virtual service
## -l: list rules
## -s: scheduling algorithm (rr = round robin)
## -a: add a real server to a virtual service
## -r: real server address
## -g: direct routing (DR/gateway) mode
[root@server2 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.4.100:80 rr
-> 172.25.4.3:80 Route 1 0 0
-> 172.25.4.4:80 Route 1 0 0
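These rules live only in the kernel and are lost on reboot. On RHEL/CentOS 7 the ipvsadm package ships an ipvsadm.service that restores rules from /etc/sysconfig/ipvsadm, so they could be made persistent roughly like this (not needed later on, since keepalived rebuilds the table from its own config):
ipvsadm-save -n > /etc/sysconfig/ipvsadm   # dump the current table in numeric form
systemctl enable ipvsadm                   # reload it at boot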
On server2, server3 and server4:
ip addr add 172.25.4.100/24 dev eth0 # add the VIP
Client test:
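The exact client commands are not shown here; a minimal round-robin check from the client could look like this (hypothetical loop, assuming curl on the client):
[root@foundation4 ~]# for i in $(seq 6); do curl -s 172.25.4.100; done   # responses should alternate between vm3 and vm4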
[root@server2 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.4.100:80 rr
-> 172.25.4.3:80 Route 1 0 5
-> 172.25.4.4:80 Route 1 0 6
server2, server3 and server4 all carry the VIP, so how do we know which node actually answered?
[root@server2 ~]# arp -d 172.25.4.100 # delete the cached ARP entry for the VIP on server2
At this point the client may resolve the VIP to a real server and access it directly, which is not safe.
On server3 and server4:
yum install -y arptables ## arptables is an ARP firewall; it only filters ARP traffic
arptables -A INPUT -d 172.25.4.100 -j DROP
# Drop ARP requests for the VIP on the INPUT chain of server3 and server4, so the client can no longer resolve the VIP to a real server and reach it directly.
[root@server3 ~]# arptables -A OUTPUT -s 172.25.4.100 -j mangle --mangle-ip-s 172.25.4.3
[root@server4 ~]# arptables -A OUTPUT -s 172.25.4.100 -j mangle --mangle-ip-s 172.25.4.4
# On the OUTPUT chain of server3 and server4, rewrite the source IP of outgoing ARP packets to the real server's own IP (RIP); if every node announced the VIP as its source, ARP conflicts would occur.
[root@server4 ~]# arptables-save
*filter
:INPUT ACCEPT
:OUTPUT ACCEPT
:FORWARD ACCEPT
-A INPUT -j DROP -d 172.25.4.100
-A OUTPUT -j mangle -s 172.25.4.100 --mangle-ip-s 172.25.4.4
-A OUTPUT -j mangle -s 172.25.4.100 --mangle-ip-s 172.25.4.3
[root@server4 ~]# arptables-save > /etc/sysconfig/arptables
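Two optional notes. First, if the arptables package ships an arptables.service (as it does on RHEL/CentOS 7), the saved file above is reloaded at boot once the service is enabled. Second, a commonly used alternative to arptables on LVS-DR real servers is to put the VIP on lo and suppress ARP with sysctls; a sketch of that variant (not what is done in this lab):
systemctl enable arptables                    # reload /etc/sysconfig/arptables at boot
# --- alternative: VIP on lo + ARP suppression instead of arptables ---
ip addr add 172.25.4.100/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1      # answer ARP only for addresses configured on the receiving interface
sysctl -w net.ipv4.conf.all.arp_announce=2    # use the best local address, not the VIP, as ARP source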
Test:
[root@foundation4 ~]# arp -n |grep 100
172.25.4.100 ether 52:54:00:42:b0:86 C br0
[root@server2 ~]# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:42:b0:86 brd ff:ff:ff:ff:ff:ff
inet 172.25.4.2/24 brd 172.25.4.255 scope global eth0
valid_lft forever preferred_lft forever
inet 172.25.4.100/24 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe42:b086/64 scope link
valid_lft forever preferred_lft forever
[root@server2 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.4.100:80 rr
-> 172.25.4.3:80 Route 1 0 2
-> 172.25.4.4:80 Route 1 0 2
DR: packets are forwarded to the real server directly, without checking whether the connection is legitimate; very fast; does not support port mapping; director and real servers must be in the same VLAN.
If server3 goes down, the director cannot detect it and keeps scheduling requests to it → add health checking with keepalived.
If the LVS director itself goes down → director redundancy, active/standby with heartbeat.
NAT: both request and reply traffic pass through the director, which can become a performance bottleneck; the nodes do not need to be in the same VLAN.
TUN (tunnel): uses routable/public networks, so the nodes can be in different locations, e.g. one in Beijing and one in Shanghai.
The VIP is configured on the RS, which is risky: anyone who knows the VIP can access the RS directly; the real servers must support the tunnel protocol (IP-in-IP).
These two modes are not commonly used here.
DNS round robin: poor resistance to attacks, and if an RS fails, DNS does not care and keeps handing out its address.
Test:
[root@server3 ~]# systemctl stop httpd
With server3 down, requests should all go to server4,
but the director has no health check, so it does not know that server3 is down.
keepalived
Keepalived, as the name suggests, keeps services alive and online, i.e. high availability or hot standby, and is used to prevent single points of failure. Keepalived lets clients reach the real IP addresses through a VIP, and the VIP automatically floats to another machine when one machine fails, which provides HA.
- Keepalived is routing software written in C.
- Keepalived was originally designed for LVS, specifically to manage and monitor the state of the service nodes in an LVS cluster. It checks node state using layer 3/4/5 mechanisms of the TCP/IP reference model; if a server misbehaves or fails, keepalived detects it and removes the failed node from the cluster automatically, with no human intervention. The only manual work left is repairing the failed node.
- VRRP-based high availability was added later. Keepalived implements its HA features mainly through VRRP (Virtual Router Redundancy Protocol), which was designed to solve the single point of failure of static routing and keep the network running without interruption. With VRRP added, keepalived therefore provides both server health checking / failure isolation and high-availability clustering.
Install and configure keepalived
Install keepalived (which provides the health checking) on server1 and server2:
yum install keepalived -y
[root@server2 ~]# ipvsadm -C # -C: clear the IPVS forwarding table
[root@server2 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
Configure the keepalived file
Keepalived configuration file overview:
All keepalived settings live in a single configuration file, and the supported options fall into three groups:
1. Global configuration (global_defs): applies to the whole keepalived service
2. VRRPD configuration: the core of keepalived
3. Virtual server configuration: defines the services and the load balancing
The configuration file is organized in blocks; each block is enclosed in { }.
[root@server1 ~]# cd /etc/keepalived/
[root@server1 keepalived]# ls
keepalived.conf
[root@server1 keepalived]# vim keepalived.conf
Global definitions
-------
3 global_defs { # global configuration block; everything inside { } applies globally
4 notification_email {
5 root@localhost # send notification mail to root on the local machine
6 }
7 notification_email_from keepalived@localhost # sender address keepalived uses when it sends email notifications (e.g. on a failover); the recipients are listed in notification_email above, multiple addresses allowed, one per line
8 smtp_server 127.0.0.1 # SMTP server used to send the mail
9 smtp_connect_timeout 30 # SMTP connection timeout
10 router_id LVS_DEVEL # identifier of the machine running keepalived
11 vrrp_skip_check_adv_addr
12 #vrrp_strict
13 vrrp_garp_interval 0
14 vrrp_gna_interval 0
15 }
VRRPD configuration
-------
16 vrrp_instance VI_1 {
## state sets the initial state of this instance; as soon as both routers are up, an election takes place
## the node with the higher priority wins and becomes MASTER, so state here does not mean this node is always the Master
17 state MASTER # this node starts as the master; set BACKUP on the standby node
18 interface eth0 # network interface the virtual IP is bound to
19 virtual_router_id 51 # virtual router ID; must be identical on both nodes
20 priority 100 # priority of the master node, 1-254; the backup node must use a lower value than the master
21 advert_int 1 # interval between VRRP advertisements (multicast); must be the same on both nodes
22 authentication {
23 auth_type PASS
24 auth_pass 1111
25 }
26
27 virtual_ipaddress {
28 172.25.4.100 # the virtual IP; must be the same on both nodes
29 }
Virtual server configuration
------
32 virtual_server 172.25.4.100 80 {
33 delay_loop 6 # health-check interval
34 lb_algo rr # LVS scheduling algorithm, rr = round robin
35 lb_kind DR # LVS forwarding mode: DR
36 #persistence_timeout 50 # session persistence time; keep it commented out here
37 protocol TCP # forward using the TCP protocol
38
## real server (backend TCP service) configuration
39 real_server 172.25.4.3 80 {
40 weight 1
41 TCP_CHECK {
42 connect_timeout 3
43 nb_get_retry 3
44 delay_before_retry 3
45 }
46 }
47
Delete everything after line 47, then copy lines 39-47 and paste them below:
48
49 real_server 172.25.4.4 80 {
50 weight 1
51 TCP_CHECK {
52 connect_timeout 3
53 nb_get_retry 3
54 delay_before_retry 3
55 }
56 }
57 }
[root@server1 keepalived]# scp keepalived.conf server2:/etc/keepalived/
keepalived.conf 100% 1032 1.7MB/s 00:00
[root@server2 ~]# vim /etc/keepalived/keepalived.conf
18 state BACKUP # backup node
21 priority 50 # priority 50
On server1 and server2:
systemctl start keepalived
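To have keepalived come back after a reboot you would normally also enable the unit:
systemctl enable keepalived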
LVS (DR mode) + keepalived: highly available load balancing
[root@server1 keepalived]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.4.100:80 rr
-> 172.25.4.3:80 Route 1 0 0
-> 172.25.4.4:80 Route 1 0 0
[root@server1 keepalived]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:5a:95:fe brd ff:ff:ff:ff:ff:ff
inet 172.25.4.1/24 brd 172.25.4.255 scope global eth0
valid_lft forever preferred_lft forever
inet 172.25.4.100/24 scope global secondary eth0 # the VIP is on server1
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe5a:95fe/64 scope link
valid_lft forever preferred_lft forever
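To see which node is currently MASTER you can also watch the VRRP advertisements (sent to multicast group 224.0.0.18) from either node, assuming tcpdump is installed:
tcpdump -i eth0 -nn vrrp   # the source IP of these packets is the node currently acting as MASTER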
On server1 and server2:
yum install mailx -y
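mailx is presumably installed so the notification mail addressed to root@localhost in global_defs can be read locally; whether mail is actually sent depends on the SMTP-alert settings, but reading root's local mailbox is simply:
[root@server1 ~]# mail   # any keepalived notifications delivered to root show up here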
Test:
Network failure
Delete the VIP on server1:
[root@server1 keepalived]# ip addr del 172.25.4.100/24 dev eth0
[root@server1 keepalived]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:5a:95:fe brd ff:ff:ff:ff:ff:ff
inet 172.25.4.1/24 brd 172.25.4.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe5a:95fe/64 scope link
valid_lft forever preferred_lft forever
The client test still succeeds:
[root@foundation4 ~]# arp -n |grep 100
172.25.4.100 ether 52:54:00:42:b0:86 C br0
[root@server2 ~]# ip addr # the VIP has floated to server2
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:42:b0:86 brd ff:ff:ff:ff:ff:ff
inet 172.25.4.2/24 brd 172.25.4.255 scope global eth0
valid_lft forever preferred_lft forever
inet 172.25.4.100/24 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe42:b086/64 scope link
valid_lft forever preferred_lft forever
[root@server2 ~]# ip addr del 172.25.4.100/24 dev eth0
On server1 and server2:
ip addr add 172.25.4.100/24 dev eth0
[root@server2 ~]# ps ax| grep keepalived
4468 ? Ss 0:00 /usr/sbin/keepalived -D # parent process, watches the child processes
4469 ? S 0:00 /usr/sbin/keepalived -D # VRRP child (high availability)
4470 ? S 0:04 /usr/sbin/keepalived -D # health-check child
4513 pts/0 R+ 0:00 grep --color=auto keepalived
Node failure
Crash the kernel on server1:
echo c > /proc/sysrq-trigger
server2 becomes MASTER and keeps serving requests; the client side is not interrupted. When server1 recovers, it immediately wins the election and becomes MASTER again, and server2 returns to BACKUP.
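A simple way to confirm that the client side really is not interrupted during the failover is to keep a request loop running on the client while server1 is crashed (hypothetical loop, same assumptions as before):
[root@foundation4 ~]# while true; do curl -s --connect-timeout 1 172.25.4.100; sleep 1; done
# the vm3/vm4 responses should keep coming while the VIP moves from server1 to server2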