I. A Brief Introduction to Load Balancing
1. Common load balancers
(1) By the protocol layer they work at:
Layer-4 load balancing (in the kernel): schedules requests according to the destination address and port in the request packet.
Layer-7 load balancing (in the application layer): schedules requests according to the content of the request; this is a "proxy" style of scheduling, e.g. Varnish.
(2) By hardware vs. software:
Hardware load balancers:
F5 BIG-IP; Citrix NetScaler
Software load balancers:
a. TCP layer: LVS, HAProxy, Nginx;
b. HTTP protocol: HAProxy, Nginx, ATS (Apache Traffic Server), Squid, Varnish;
c. MySQL protocol: mysql-proxy
II. A Brief Introduction to LVS
1. LVS architecture and working modes
Full name: Linux Virtual Server.
LVS is an open-source load-balancing project initiated by Dr. Wensong Zhang. It has been integrated into the Linux kernel, where it implements IP-level request load balancing. It is a layer-4 load balancer (in the kernel): it schedules requests according to the destination address and port in the request packet.
The architecture works as follows: an Internet user accesses the company from outside, and the request arrives at the LVS director, which uses its configured algorithm to decide which back-end web server receives it. Round-robin, for example, spreads external requests evenly across all back-end servers. Although a request to the director is forwarded to some real server, the real servers attach to the same storage and provide the same service, so the user receives identical content regardless of which real server answers: the cluster is transparent to the client.
Depending on the working mode, the real servers return data to the client in different ways. LVS has three working modes: NAT, TUN, and DR.
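Since LVS lives in the kernel as the ip_vs module, a quick sanity check before any of the deployments below is to load and list that module (a minimal sketch; ipvsadm normally loads the module automatically on first use):
[root@server1 ~]# modprobe ip_vs ## load the IPVS kernel module
[root@server1 ~]# lsmod | grep ip_vs ## confirm the module is present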
2. LVS scheduling algorithms
(1) Round-robin (rr)
Requests are dispatched to the servers one after another in a cycle; its greatest virtue is simplicity. Round-robin assumes every server has the same processing capacity, so the scheduler spreads requests evenly across all real servers.
(2) Weighted round-robin (wrr)
An optimization of plain round-robin: LVS takes each server's capability into account by assigning it a weight, and servers with higher weights handle more requests.
(3) Least-connection (lc)
Dispatches each request to the server with the fewest active connections.
(4) Weighted least-connection (wlc)
Each server gets a weight, and the scheduler tries to keep each server's connection count in balance with its weight.
(5) Locality-based least-connection (lblc)
(6) Locality-based least-connection with replication (lblcr)
(7) Destination hashing (dh)
A hash function maps each destination IP address to a server, so requests for the same destination IP always go to the same server, unless that server is unavailable or overloaded.
(8) Source hashing (sh)
The same idea applied to the source IP address. How a scheduler is actually selected is sketched below.
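The scheduler is chosen with ipvsadm's -s flag when a virtual service is created, and per-server weights with -w. A minimal sketch using this article's VIP (wrr and these particular weights are illustrative, not part of the deployments below):
[root@server1 ~]# ipvsadm -A -t 172.25.4.100:80 -s wrr ## weighted round-robin
[root@server1 ~]# ipvsadm -a -t 172.25.4.100:80 -r 172.25.4.112:80 -g -w 2 ## weight 2
[root@server1 ~]# ipvsadm -a -t 172.25.4.100:80 -r 172.25.4.113:80 -g -w 1 ## weight 1
With these weights, 172.25.4.112 should receive roughly twice as many requests as 172.25.4.113.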
III. Building and Deploying LVS
1. DR mode
(1) Setting up the LVS director
Install the LVS management tool ipvsadm:
[root@server1 ~]# yum install ipvsadm.x86_64 -y
(2) Configuring the LVS rules
Configure round-robin scheduling so that every request to port 80 of 172.25.4.100 is forwarded by the director, in DR (direct routing) mode, to port 80 on the two hosts 172.25.4.112 and 172.25.4.113.
[root@server1 ~]# ipvsadm -l ## view the IPVS table
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@server1 ~]# ipvsadm -A -t 172.25.4.100:80 -s rr ## add the virtual service
[root@server1 ~]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.4.100:http rr
[root@server1 ~]# ipvsadm -a -t 172.25.4.100:80 -r 172.25.4.112:80 -g ## add a real server (-g = DR mode)
[root@server1 ~]# ipvsadm -a -t 172.25.4.100:80 -r 172.25.4.113:80 -g
[root@server1 ~]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.4.100:http rr
-> server2:http Route 1 0 0
-> server3:http Route 1 0 0
(3) Adding the VIP and configuring the real servers
Director:
[root@server1 ~]# ip addr add 172.25.4.100/24 dev eth0
[root@server1 ~]# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:a8:e6:55 brd ff:ff:ff:ff:ff:ff
inet 172.25.4.111/24 brd 172.25.4.255 scope global eth0
valid_lft forever preferred_lft forever
inet 172.25.4.100/24 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fea8:e655/64 scope link
valid_lft forever preferred_lft forever
Real servers:
[root@server2 ~]# yum install httpd -y
[root@server2 ~]# systemctl start httpd
[root@server2 ~]# systemctl enable httpd
[root@server2 ~]# cd /var/www/html
[root@server2 html]# ls
[root@server2 html]# vim index.html
[root@server2 html]# cat index.html
server2
[root@server2 html]# ip addr add 172.25.4.100/24 dev eth0
[root@server3 ~]# yum install httpd -y
[root@server3 ~]# systemctl start httpd
[root@server3 ~]# systemctl enable httpd
[root@server3 ~]# cd /var/www/html
[root@server3 html]# ls
[root@server3 html]# vim index.html
[root@server3 html]# cat index.html
server3
[root@server3 html]# ip addr add 172.25.4.100/24 dev eth0
(4) Access from the physical host
[root@foundation4 kiosk]# curl 172.25.4.100
server3
[root@foundation4 kiosk]# curl 172.25.4.100
server2
[root@foundation4 kiosk]# curl 172.25.4.100
server3
[root@foundation4 kiosk]# arp -an|grep 172.25.4.100
? (172.25.4.100) at 52:54:00:a8:e6:55 [ether] on br0 ## the cached MAC belongs to the director (VS)
Clear the cached entry and access again:
[root@foundation4 kiosk]# arp -d 172.25.4.100
[root@foundation4 kiosk]# curl 172.25.4.100
server2
[root@foundation4 kiosk]# curl 172.25.4.100
server2
[root@foundation4 kiosk]# curl 172.25.4.100
server2
[root@foundation4 kiosk]# curl 172.25.4.100
server2
[root@foundation4 kiosk]# arp -an|grep 172.25.4.100
? (172.25.4.100) at 52:54:00:91:da:8e [ether] on br0 ## the cached MAC belongs to a real server
DR mode requires the director and the real servers to be on the same LAN, with the VIP shared between the director and every real server: a real server answers the client directly, and its reply must carry the VIP as the source IP and the client's IP as the destination, so that the response appears to come from the address the client originally contacted. But because every host now holds the VIP, a real server may answer the client's ARP query itself and bypass the director, which is exactly what happened above.
We therefore use the arptables firewall to stop the real servers from answering ARP for 172.25.4.100.
Add arptables rules on the real servers:
[root@server2 network-scripts]# yum install arptables.x86_64 -y
[root@server2 network-scripts]# arptables -A INPUT -d 172.25.4.100 -j DROP
[root@server2 network-scripts]# arptables -A OUTPUT -s 172.25.4.100 -j mangle --mangle-ip-s 172.25.4.112
[root@server2 network-scripts]# arptables -L
Chain INPUT (policy ACCEPT)
-j DROP -d server2
Chain OUTPUT (policy ACCEPT)
-j mangle -s 172.25.4.100 --mangle-ip-s server2
Chain FORWARD (policy ACCEPT)
[root@server3 html]# arptables -A INPUT -d 172.25.4.100 -j DROP
[root@server3 html]# arptables -A OUTPUT -s 172.25.4.100 -j mangle --mangle-ip-s 172.25.4.113
[root@server3 html]# arptables -L
Chain INPUT (policy ACCEPT)
-j DROP -d server3
Chain OUTPUT (policy ACCEPT)
-j mangle -s server3 --mangle-ip-s server3
Chain FORWARD (policy ACCEPT)
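Note that arptables rules live only in kernel memory and are lost on reboot. A minimal persistence sketch, assuming the RHEL 7 arptables-services package (an extra step not installed in this walkthrough):
[root@server2 ~]# yum install arptables-services -y
[root@server2 ~]# arptables-save > /etc/sysconfig/arptables ## dump the current rules
[root@server2 ~]# systemctl enable arptables ## restore them at boot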
Test again from the physical host:
[root@foundation4 kiosk]# curl 172.25.4.100
server3
[root@foundation4 kiosk]# curl 172.25.4.100
server2
[root@foundation4 kiosk]# curl 172.25.4.100
server3
[root@foundation4 kiosk]# curl 172.25.4.100
server2
[root@foundation4 kiosk]# curl 172.25.4.100
server3
[root@foundation4 kiosk]# arp -an|grep 172.25.4.100
? (172.25.4.100) at 52:54:00:a8:e6:55 [ether] on br0
[root@foundation4 kiosk]# arp -d 172.25.4.100
[root@foundation4 kiosk]# curl 172.25.4.100
server2
[root@foundation4 kiosk]# curl 172.25.4.100
server3
[root@foundation4 kiosk]# curl 172.25.4.100
server2
[root@foundation4 kiosk]# curl 172.25.4.100
server3
[root@foundation4 kiosk]# arp -an|grep 172.25.4.100
? (172.25.4.100) at 52:54:00:a8:e6:55 [ether] on br0
2. TUN (tunnel) mode
(1) Director configuration
Note: delete the VIP and the IPVS rules left over from the previous experiment so the table starts out empty (a cleanup sketch follows).
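A minimal cleanup sketch (ipvsadm -C flushes the whole IPVS table):
[root@server1 ~]# ipvsadm -C
[root@server1 ~]# ip addr del 172.25.4.100/24 dev eth0 ## remove the old VIP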
[root@server1 ~]# modprobe ipip ## load the IPIP tunnel module
[root@server1 ~]# ip addr add 172.25.4.100/24 dev tunl0 ## put the VIP on the tunnel interface
[root@server1 ~]# ip link set up tunl0 ## bring up tunl0
[root@server1 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:a8:e6:55 brd ff:ff:ff:ff:ff:ff
inet 172.25.4.111/24 brd 172.25.4.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fea8:e655/64 scope link
valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
inet 172.25.4.100/24 scope global tunl0
valid_lft forever preferred_lft forever
[root@server1 ~]# ipvsadm -A -t 172.25.4.100:80 -s rr ## add the virtual service
[root@server1 ~]# ipvsadm -a -t 172.25.4.100:80 -r 172.25.4.112:80 -i ## add real servers (-i = tunnel mode)
[root@server1 ~]# ipvsadm -a -t 172.25.4.100:80 -r 172.25.4.113:80 -i
[root@server1 ~]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP server1:http rr
-> server2:http Tunnel 1 0 0
-> server3:http Tunnel 1 0 0
(2) Real server configuration
Note: on both real servers, remove the arptables rules and the VIP from the previous experiment (a cleanup sketch follows).
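A minimal sketch of that cleanup on each real server (arptables -F flushes all arptables chains):
[root@server2 ~]# arptables -F
[root@server2 ~]# ip addr del 172.25.4.100/24 dev eth0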
Real server IP setup
a.server2
[root@server2 ~]# modprobe ipip
[root@server2 ~]# ip addr add 172.25.4.100/24 dev tunl0
[root@server2 ~]# ip link set up tunl0
[root@server2 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:f6:35:64 brd ff:ff:ff:ff:ff:ff
inet 172.25.4.112/24 brd 172.25.4.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fef6:3564/64 scope link
valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
inet 172.25.4.100/24 scope global tunl0
valid_lft forever preferred_lft forever
b.server3
[root@server3 ~]# modprobe ipip
[root@server3 ~]# ip addr add 172.25.4.100/24 dev tunl0
[root@server3 ~]# ip link set up tunl0
[root@server3 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:39:ba:1b brd ff:ff:ff:ff:ff:ff
inet 172.25.4.113/24 brd 172.25.4.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe39:ba1b/64 scope link
valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
inet 172.25.4.100/24 scope global tunl0
valid_lft forever preferred_lft forever
Disable reverse path filtering (rp_filter) on the real servers. In TUN mode the request arrives on tunl0 with the VIP as its destination while the reply leaves through eth0, so strict reverse path filtering would silently drop the traffic.
a.server2
[root@server2 ~]# sysctl -a|grep rp_filter
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.eth0.arp_filter = 0
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.rp_filter = 0
net.ipv4.conf.tunl0.arp_filter = 0
net.ipv4.conf.tunl0.rp_filter = 1
[root@server2 ~]# sysctl -w net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.all.rp_filter = 0
[root@server2 ~]# sysctl -w net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.rp_filter = 0
[root@server2 ~]# sysctl -w net.ipv4.conf.eth0.rp_filter=0
net.ipv4.conf.eth0.rp_filter = 0
[root@server2 ~]# sysctl -w net.ipv4.conf.tunl0.rp_filter=0
net.ipv4.conf.tunl0.rp_filter = 0
[root@server2 ~]# sysctl -a|grep rp_filter
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.eth0.arp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.rp_filter = 0
net.ipv4.conf.tunl0.arp_filter = 0
net.ipv4.conf.tunl0.rp_filter = 0
[root@server2 ~]# sysctl -p
b.server3
[root@server3 ~]# sysctl -a|grep rp_filter
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.eth0.arp_filter = 0
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.rp_filter = 0
net.ipv4.conf.tunl0.arp_filter = 0
net.ipv4.conf.tunl0.rp_filter = 1
[root@server3 ~]# sysctl -w net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.all.rp_filter = 0
[root@server3 ~]# sysctl -w net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.rp_filter = 0
[root@server3 ~]# sysctl -w net.ipv4.conf.eth0.rp_filter=0
net.ipv4.conf.eth0.rp_filter = 0
[root@server3 ~]# sysctl -w net.ipv4.conf.tunl0.rp_filter=0
net.ipv4.conf.tunl0.rp_filter = 0
[root@server3 ~]# sysctl -a|grep rp_filter
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.eth0.arp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.rp_filter = 0
net.ipv4.conf.tunl0.arp_filter = 0
net.ipv4.conf.tunl0.rp_filter = 0
[root@server3 ~]# sysctl -p
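The sysctl -w changes above live only in the running kernel, and sysctl -p merely re-reads /etc/sysctl.conf, which these commands never edited. A minimal persistence sketch (assuming the stock /etc/sysctl.conf location; repeat on both real servers):
[root@server2 ~]# cat >> /etc/sysctl.conf <<EOF
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.tunl0.rp_filter = 0
EOF
[root@server2 ~]# sysctl -p ## now actually applies these settings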
(3) Test from the physical host
[root@foundation4 kiosk]# curl 172.25.4.100
server3
[root@foundation4 kiosk]# curl 172.25.4.100
server2
[root@foundation4 kiosk]# curl 172.25.4.100
server3
[root@foundation4 kiosk]# curl 172.25.4.100
server2
[root@foundation4 kiosk]# curl 172.25.4.100
server3
[root@foundation4 kiosk]# curl 172.25.4.100
server2
[root@foundation4 kiosk]# curl 172.25.4.100
server3
[root@foundation4 kiosk]# curl 172.25.4.100
server2
(4) Check the director's IPVS table
[root@server1 ~]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP server1:http rr
-> server2:http Tunnel 1 0 4
-> server3:http Tunnel 1 0 4
3. NAT mode
(1) Purpose
NAT (Network Address Translation) rewrites packet headers so that hosts with private IPs inside an enterprise can reach the outside network, and so that external users can reach hosts with private IPs inside it.
(2) How it works
Step 1: the user connects to the LVS VIP, the externally visible address. The client has no idea that the VIP belongs to a mere director, and knows nothing about the real servers behind it.
Step 2: the director receives the request and, according to the configured algorithm, selects a real server; before forwarding the packet it rewrites the destination address and port to those of the chosen real server.
Step 3: the real server returns its response to the director, which rewrites the source address and port to the VIP and the corresponding director port, then sends the packet back to the user. The director also keeps a connection hash table recording each connection and its forwarding decision; subsequent packets of the same connection are matched against this table and sent to the same real server and port. That table can be inspected directly, as sketched below.
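A minimal sketch of inspecting the connection table on the director (-c lists connection entries, -n keeps addresses numeric):
[root@server1 ~]# ipvsadm -lnc ## show current IPVS connections and their states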
(3) Implementing LVS NAT mode
Director setup
Add the VIP:
[root@server1 ~]# ip addr add 172.25.254.100/24 dev eth0 ## the VIP is on the clients' subnet, not on the internal one
[root@server1 ~]# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:ea:2b:be brd ff:ff:ff:ff:ff:ff
inet 172.25.4.111/24 brd 172.25.4.255 scope global eth0
valid_lft forever preferred_lft forever
inet 172.25.254.100/24 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:feea:2bbe/64 scope link
valid_lft forever preferred_lft forever
Install the LVS management tool:
[root@server1 ~]# yum install ipvsadm -y
Add the virtual service and the real servers (-m selects NAT/masquerading):
[root@server1 ~]# ipvsadm -A -t 172.25.254.100:80 -s rr
[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.4.112:80 -m
[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.4.113:80 -m
[root@server1 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP server1:http rr
-> server2:http Masq 1 0 0
-> server3:http Masq 1 0 0
Enable kernel IP forwarding on the director:
[root@server1 ~]# sysctl -a | grep ip_forward
net.ipv4.ip_forward = 0
net.ipv4.ip_forward_use_pmtu = 0
[root@server1 ~]# sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
[root@server1 ~]# sysctl -a | grep ip_forward
net.ipv4.ip_forward = 1
net.ipv4.ip_forward_use_pmtu = 0
[root@server1 ~]# modprobe iptable_nat ## load the NAT module so the very first request is answered without delay
As with the rp_filter settings earlier, net.ipv4.ip_forward = 1 must also be added to /etc/sysctl.conf to survive a reboot.
Setup on the two real servers
Install httpd on both and set their default gateway to the director's internal IP. server3 is configured the same way as server2 shown below; the gateway line added to ifcfg-eth0 is sketched after the routing table.
[root@server2 ~]# yum install httpd -y
[root@server2 ~]# vim /var/www/html/index.html
[root@server2 ~]# cat /var/www/html/index.html
server2
[root@server2 ~]# systemctl start httpd
[root@server2 ~]# systemctl enable httpd
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@server2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
[root@server2 ~]# systemctl restart network
[root@server2 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.25.4.111 0.0.0.0 UG 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
172.25.4.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
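The ifcfg-eth0 edit above is not shown; a minimal sketch of the line it presumably added (standard ifcfg key/value syntax, pointing at the director's internal IP):
GATEWAY=172.25.4.111
The route -n output confirms it: the default route (0.0.0.0) now points at 172.25.4.111 via eth0.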
Test from the physical host:
[root@foundation4 kiosk]# curl 172.25.254.100
server3
[root@foundation4 kiosk]# curl 172.25.254.100
server2
[root@foundation4 kiosk]# curl 172.25.254.100
server3
[root@foundation4 kiosk]# curl 172.25.254.100
server2
[root@foundation4 kiosk]# curl 172.25.254.100
server3
Director state at this point:
[root@server1 ~]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP server1:http rr
-> server2:http Masq 1 0 2
-> server3:http Masq 1 0 3
IV. Implementing an LVS Load-Balanced Cluster
LVS + heartbeat + ldirectord
1. heartbeat
Heartbeat provides two core functions: heartbeat monitoring and resource takeover. Monitoring can run over network links or serial ports, and redundant links are supported. Two machines running Heartbeat check each other's state; when one detects that the other has failed, it takes over the failed machine's resources, preserving high availability.
2. ldirectord
ldirectord monitors the state of the real servers in an LVS setup. It runs on the IPVS node and probes each back-end server; if a server does not respond to ldirectord's request, ldirectord marks it unavailable and removes it from the IPVS table via ipvsadm. If a later check succeeds, the server is added back, again via ipvsadm.
3. Experiment in DR mode
(1) Add the HighAvailability repository to the director's yum configuration
[root@server1 yum.repos.d]# vim westos.repo
[root@server1 yum.repos.d]# cat westos.repo
[rhel7.3]
name=rhel7.3
baseurl=http://172.25.4.250/westos
gpgcheck=0
[HighAvailability]
name=HighAvailability
baseurl=http://172.25.4.250/westos/addons/HighAvailability
gpgcheck=0
[root@server1 yum.repos.d]# yum clean all
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Cleaning repos: HighAvailability rhel7.3
Cleaning up everything
[root@server1 yum.repos.d]# yum repolist
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
HighAvailability | 4.1 kB 00:00
rhel7.3 | 4.1 kB 00:00
(1/4): HighAvailability/group_gz | 3.4 kB 00:00
(2/4): HighAvailability/primary_db | 27 kB 00:00
(3/4): rhel7.3/group_gz | 136 kB 00:00
(4/4): rhel7.3/primary_db | 3.9 MB 00:00
repo id repo name status
HighAvailability HighAvailability 37
rhel7.3 rhel7.3 4,751
repolist: 4,788 ## more packages are now available (4,751 before adding HighAvailability)
(2) Installing ldirectord and resolving its dependencies
[root@server1 mnt]# ls
ldirectord-3.9.5-3.1.x86_64.rpm
[root@server1 mnt]# yum install -y ldirectord-3.9.5-3.1.x86_64.rpm
[root@server1 mnt]# rpm -qpl ldirectord-3.9.5-3.1.x86_64.rpm
warning: ldirectord-3.9.5-3.1.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 7b709911: NOKEY
/etc/ha.d
/etc/ha.d/resource.d
/etc/ha.d/resource.d/ldirectord
/etc/init.d/ldirectord
/etc/logrotate.d/ldirectord
/usr/lib/ocf/resource.d/heartbeat/ldirectord
/usr/sbin/ldirectord
/usr/share/doc/ldirectord-3.9.5
/usr/share/doc/ldirectord-3.9.5/COPYING
/usr/share/doc/ldirectord-3.9.5/ldirectord.cf
/usr/share/man/man8/ldirectord.8.gz
(3) Editing the configuration file
[root@server1 mnt]# cd /etc/ha.d
[root@server1 ha.d]# ls
resource.d shellfuncs
[root@server1 ha.d]# cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf /etc/ha.d/
[root@server1 ha.d]# ls
ldirectord.cf resource.d shellfuncs
[root@server1 ha.d]# vim ldirectord.cf
checktimeout=3 # seconds to wait for a real server's health-check response
checkinterval=1 # seconds between two checks
autoreload=yes # re-read this file automatically when it changes
quiescent=no # on failure, remove the server (dropping its connections) instead of setting its weight to 0
virtual=172.25.4.100:80 ## the VIP
    real=172.25.4.112:80 gate # real server (gate = DR mode)
    real=172.25.4.113:80 gate # real server
    fallback=127.0.0.1:80 gate # fall back to this host when all real servers are down
    service=http # service to check
    scheduler=rr # scheduling algorithm
    protocol=tcp # protocol
    checktype=negotiate # health-check method
    checkport=80 # port to check
    request="index.html" # page fetched during the check
[root@server1 ha.d]# systemctl restart ldirectord
(4) Testing from the physical host
Scenario 1: httpd on server2 is stopped (sketched below).
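A minimal sketch of triggering the failure, assuming httpd is managed with systemctl as earlier in this article:
[root@server2 ~]# systemctl stop httpd
Within about checkinterval seconds ldirectord fails its check against server2 and removes it from the IPVS table, so every request lands on server3: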
[root@foundation4 mnt]# curl 172.25.4.100
server3
[root@foundation4 mnt]# curl 172.25.4.100
server3
[root@foundation4 mnt]# curl 172.25.4.100
server3
[root@foundation4 mnt]# curl 172.25.4.100
server3
[root@foundation4 mnt]# curl 172.25.4.100
server3
On the director:
[root@server1 ha.d]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP server1:http rr
-> server3:http Route 1 0 5
Scenario 2: httpd on server3 is stopped (httpd on server2 has been brought back up):
[root@foundation4 mnt]# curl 172.25.4.100
server2
[root@foundation4 mnt]# curl 172.25.4.100
server2
[root@foundation4 mnt]# curl 172.25.4.100
server2
[root@foundation4 mnt]# curl 172.25.4.100
server2
On the director:
[root@server1 ha.d]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP server1:http rr
-> server2:http Route 1 0 4