Load Balancing with LVS: Principles and Configuration, Part 1 (LVS/DR Mode)

This article walks through the principles and configuration of load balancing with LVS in DR mode, including adding the virtual IP, configuring the scheduling algorithm, resolving the ARP arbitrariness problem, adding health checks, and building a high-availability cluster with keepalived. During the experiments, httpd is stopped on the back-end servers to verify load balancing and failover.


1. Characteristics of the LVS-DR model:

LVS/DR exploits the asymmetric nature of most Internet services: the load balancer only schedules requests, while the real servers return responses directly to the clients, which greatly increases the throughput of the whole cluster. The director and the server pool must each have a network interface on the same uninterrupted LAN segment, for example connected through a switch or a high-speed hub. The VIP address is shared by the director and the server pool: the VIP configured on the director is externally visible and receives the request packets for the virtual service, while every real server configures the VIP on a non-ARP network device, where it is invisible to the outside and is used only to process packets whose destination address is the VIP. A minimal sketch of that real-server setup follows.
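The classic way to place the VIP on a non-ARP device is to bind it to the loopback interface and suppress ARP through sysctl. The walkthrough below takes a different route (arptables_jf), but as an illustration, assuming the VIP 172.25.38.100 used later in this article:

# On each real server: put the VIP on lo with a /32 mask so the host accepts
# packets addressed to it but never announces the address on the LAN.
ip addr add 172.25.38.100/32 dev lo
# Suppress ARP replies/announcements for the VIP (the usual sysctl method;
# the steps below use arptables instead to achieve the same effect).
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce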

2. LVS terminology:

1. DS: Director Server, the front-end load-balancer node.
2. RS: Real Server, a back-end server that does the actual work.
3. VIP: the externally facing IP address that client requests are sent to.
4. DIP: Director Server IP, the address used mainly to communicate with the internal hosts.
5. RIP: Real Server IP, the address of a back-end server.
6. CIP: Client IP, the address of the client making the request.

3. LVS load-scheduling algorithms:

1. RR: round-robin scheduling distributes incoming requests to the real servers in the cluster in turn, one after another.
2. WRR: weighted round-robin scheduling distributes requests to the real servers in proportion to their assigned weights, so more capable servers receive more requests.
3. LC: least-connection scheduling assigns each new connection to the server that currently has the fewest connections.
4. WLC: weighted least-connection scheduling assigns new connections based on the ratio of each server's current connections to its weight.
5. LBLC: locality-based least-connection scheduling balances load according to the destination IP address of the request; it is mainly used in cache cluster systems, where the destination IP address of client requests varies.
6. LBLCR: locality-based least-connection with replication also balances by destination IP address and is likewise mainly used in cache clusters.
7. DH: destination-hashing scheduling looks up the server for the request's destination address in a hash table and sends the request there if that server is available and not overloaded.
8. SH: source-hashing scheduling looks up the server for the request's source address in a hash table and sends the request there if that server is available and not overloaded.
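As an illustration of how the algorithm is selected in practice (addresses borrowed from the DR setup later in this article; the weights are made up), the scheduler is chosen with -s when the virtual service is created and per-server weights with -w:

# Weighted round robin: 172.25.38.3 receives twice as many requests as 172.25.38.4.
ipvsadm -A -t 172.25.38.100:80 -s wrr
ipvsadm -a -t 172.25.38.100:80 -r 172.25.38.3:80 -g -w 2
ipvsadm -a -t 172.25.38.100:80 -r 172.25.38.4:80 -g -w 1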

4. Load-balancing working modes:

Among IP-based load-balancing approaches, three working modes are common (see the ipvsadm flags after this list):
(1) Address translation, NAT mode for short: the load balancer acts as the gateway; the servers and the director sit in the same private network, which gives good security.
(2) IP tunnelling, TUN mode for short: the director is only the entry point for clients; each node replies to the client directly over its own Internet connection without passing back through the director. The server nodes can be scattered across different locations on the Internet, each with its own public IP address, and communicate with the director through dedicated IP tunnels.
(3) Direct routing, DR mode for short: similar to TUN mode, but the nodes are not geographically dispersed; they sit on the same physical network as the director and are reached over the local network, so no dedicated IP tunnel is needed.
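These three modes map directly onto the forwarding-method flag ipvsadm takes when a real server is added; a sketch with placeholder VIP/RIP values:

# Forwarding method is chosen per real server:
ipvsadm -a -t VIP:80 -r RIP:80 -m   # NAT (masquerading)
ipvsadm -a -t VIP:80 -r RIP:80 -i   # TUN (IP-in-IP tunnelling)
ipvsadm -a -t VIP:80 -r RIP:80 -g   # DR  (direct routing, used in this article)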

4.1 LVS/DR mode:

First, set up the environment:

[root@server1 varnish]# /etc/init.d/varnish stop   # stop varnish
[root@server1 varnish]# /etc/init.d/httpd stop   # stop httpd
[root@server1 varnish]# cd
[root@server1 ~]# ls
anaconda-ks.cfg  install.log         varnish-3.0.5-1.el6.x86_64.rpm
bansys.zip       install.log.syslog  varnish-libs-3.0.5-1.el6.x86_64.rpm
[root@server1 ~]# rm -fr *   # delete everything
[root@server1 ~]# ls
[root@server1 ~]# cd /etc/yum.repos.d/
[root@server1 yum.repos.d]# ls
rhel-source.repo
[root@server1 yum.repos.d]# vim rhel-source.repo   # configure the yum repository to add the extra channels
[HighAvailability]
name=HighAvailability
baseurl=http://172.25.38.250/source6.5/HighAvailability
gpgcheck=0

[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.38.250/source6.5/LoadBalancer
gpgcheck=0

[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.38.250/source6.5/ResilientStorage
gpgcheck=0

[ScalableFileSystem]
name=ScalableFileSystem
baseurl=http://172.25.38.250/source6.5/ScalableFileSystem
gpgcheck=0


[root@server1 yum.repos.d]# yum repolist   # list the configured yum repositories

ipvsadm parameters explained:

-A --add-service  add a new virtual-service record to the kernel's virtual server table, i.e. create a new virtual server.
-E --edit-service  edit a virtual-service record in the kernel's virtual server table.
-D --delete-service  delete a virtual-service record from the kernel's virtual server table.
-C --clear  clear all records from the kernel's virtual server table.
-R --restore  restore virtual-server rules from input.
-S --save  save the virtual-server rules, printed in a format that -R can read back.
-a --add-server  add a new real-server record to a virtual-service record, i.e. add a real server to a virtual server.
-e --edit-server  edit a real-server record within a virtual-service record.
-d --delete-server  delete a real-server record from a virtual-service record.
-L|-l --list  display the kernel's virtual server table.
-Z --zero  zero the counters of the virtual server table (clears the current connection counts, etc.).
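As a quick illustration of -S and -R (not part of the original walkthrough; /etc/sysconfig/ipvsadm is the conventional location used by the ipvsadm init script on RHEL):

ipvsadm -S -n > /etc/sysconfig/ipvsadm   # dump the current rules in a form -R can read
ipvsadm -C                               # clear the table
ipvsadm -R < /etc/sysconfig/ipvsadm      # restore the saved rules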

Install and configure ipvsadm:

[root@server1 yum.repos.d]# yum install -y ipvsadm   # install ipvsadm


[root@server1 yum.repos.d]# ipvsadm -L   # list the current rules
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@server1 yum.repos.d]# ipvsadm -A -t 172.25.38.100:80 -s rr   # add a virtual service: -A adds the service, -s rr selects round-robin scheduling
[root@server1 yum.repos.d]# ipvsadm -a -t 172.25.38.100:80 -r 172.25.38.3:80 -g   # add a real server on the same LAN: -g is direct-routing (gateway) mode, -t means a TCP service; with -a the port cannot be changed
[root@server1 yum.repos.d]# ipvsadm -a -t 172.25.38.100:80 -r 172.25.38.4:80 -g
[root@server1 yum.repos.d]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.38.100:http rr
  -> server2:http                 Route   1      0          0         
  -> server3:http                 Route   1      0          0         
[root@server1 yum.repos.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.38.100:80 rr
  -> 172.25.38.3:80               Route   1      0          0         
  -> 172.25.38.4:80               Route   1      0          0       


[root@server1 yum.repos.d]# ip addr   # check server1's addresses
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:cf:1f:ba brd ff:ff:ff:ff:ff:ff
    inet 172.25.38.2/24 brd 172.25.38.255 scope global eth0
    inet6 fe80::5054:ff:fecf:1fba/64 scope link 
       valid_lft forever preferred_lft forever
[root@server1 yum.repos.d]# ip addr add 172.25.38.100/24 dev eth0   # add the virtual IP to the eth0 interface
[root@server1 yum.repos.d]# ip addr   # confirm it has been added
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:cf:1f:ba brd ff:ff:ff:ff:ff:ff
    inet 172.25.38.2/24 brd 172.25.38.255 scope global eth0
    inet 172.25.38.100/24 scope global secondary eth0
    inet6 fe80::5054:ff:fecf:1fba/64 scope link 
       valid_lft forever preferred_lft forever
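Note that an address added with ip addr add does not survive a reboot. For the manual setup in this section it could be persisted with an interface-alias file, along the lines of the hypothetical RHEL 6 example below (later in the article keepalived manages the VIP itself, so this is not needed there):

# /etc/sysconfig/network-scripts/ifcfg-eth0:1   (illustrative only)
DEVICE=eth0:1
IPADDR=172.25.38.100
NETMASK=255.255.255.0
ONBOOT=yes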

Test from the physical host (the connection fails because once a request reaches server2 it cannot be handled; server2 does not yet have the .100 address):

[root@foundation38 Desktop]# curl 172.25.38.100
^C
[root@foundation38 Desktop]# curl 172.25.38.100
^C
[root@foundation38 Desktop]# curl 172.25.38.100
c^C

However, checking with ipvsadm on the director shows the requests were indeed scheduled (the TCP three-way handshake simply cannot complete with server2 and server3):

[root@server1 yum.repos.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.38.100:80 rr
  -> 172.25.38.3:80               Route   1      0          1         
  -> 172.25.38.4:80               Route   1      0          2 

Add the .100 address on server2 and server3 so that the three-way handshake can complete.

Add the virtual IP on server2:

[root@server2 ~]# ip addr add 172.25.38.100/24 dev eth0   # add the virtual IP
[root@server2 ~]# ip addr   # confirm it has been added
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:d0:24:74 brd ff:ff:ff:ff:ff:ff
    inet 172.25.38.3/24 brd 172.25.38.255 scope global eth0
    inet 172.25.38.100/24 scope global secondary eth0
    inet6 fe80::5054:ff:fed0:2474/64 scope link 
       valid_lft forever preferred_lft forever

Add the virtual IP on server3:

[root@server3 ~]# ip addr add 172.25.38.100/24 dev eth0   # add the virtual IP
[root@server3 ~]# ip addr   # confirm it has been added
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:59:12:f7 brd ff:ff:ff:ff:ff:ff
    inet 172.25.38.4/24 brd 172.25.38.255 scope global eth0
    inet 172.25.38.100/24 scope global secondary eth0
    inet6 fe80::5054:ff:fe59:12f7/64 scope link 
       valid_lft forever preferred_lft forever


Make sure Apache (httpd) is running on server2 and server3 and that each serves a default page.
Because all three machines now hold the .100 address on the same LAN (VLAN), the behaviour is arbitrary: there is no guarantee that server1 will keep being used as the director:

[root@foundation38 Desktop]# curl 172.25.38.100   # load balancing works
www.westos.org
[root@foundation38 Desktop]# curl 172.25.38.100
bbs.westos.html
[root@foundation38 Desktop]# curl 172.25.38.100
www.westos.org
[root@foundation38 Desktop]# curl 172.25.38.100
bbs.westos.html
[root@foundation38 Desktop]# arp -an |grep 100   # the director happened to be server1, so its MAC address was cached first
? (172.25.38.100) at 52:54:00:cf:1f:ba [ether] on br0


[root@foundation38 Desktop]# arp -d 172.25.38.100   # flush the cached entry
[root@foundation38 Desktop]# arp -an |grep 100   # the entry is now empty
? (172.25.38.100) at <incomplete> on br0
[root@foundation38 Desktop]# ping 172.25.38.100   # contact the virtual IP again
PING 172.25.38.100 (172.25.38.100) 56(84) bytes of data.
[root@foundation38 Desktop]# arp -an |grep 100   # the entry now holds server2's MAC address, so requests are no longer load balanced; this is the arbitrariness problem
? (172.25.38.100) at 52:54:00:d0:24:74 [ether] on br0
[root@foundation38 Desktop]# curl 172.25.38.100
www.westos.org
[root@foundation38 Desktop]# curl 172.25.38.100
www.westos.org
[root@foundation38 Desktop]# curl 172.25.38.100
www.westos.org

Resolve the arbitrariness by adding rules on server2 and server3 so that they refuse to answer ARP requests for the VIP:

Install the package and add the rules on server2:

[root@server2 ~]# yum install -y arptables_jf   # install the package


[root@server2 ~]# arptables -L   # list the current rules
Chain IN (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       

Chain OUT (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       

Chain FORWARD (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
[root@server2 ~]# arptables -A IN -d 172.25.38.100 -j DROP   # drop incoming ARP requests for the VIP
[root@server2 ~]# arptables -A OUT -s 172.25.38.100 -j mangle --mangle-ip-s 172.25.38.3   # rewrite outgoing ARP traffic to use the real address
[root@server2 ~]# arptables -L   # verify the rules have been added
Chain IN (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
DROP       anywhere             172.25.38.100        anywhere           anywhere           any    any        any        any       

Chain OUT (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
mangle     172.25.38.100        anywhere             anywhere           anywhere           any    any        any        any       --mangle-ip-s server2 

Chain FORWARD (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
[root@server2 ~]# /etc/init.d/arptables_jf save   # save the rules
Saving current rules to /etc/sysconfig/arptables:          [  OK  ]
[root@server2 ~]# /etc/init.d/arptables_jf start   # start the service
Flushing all current rules and user defined chains:        [  OK  ]
Clearing all current rules and user defined chains:        [  OK  ]
Applying arptables firewall rules:                         [  OK  ]

Install the package and add the rules on server3:

[root@server3 ~]# yum install -y arptables_jf   # install the package


[root@server3 ~]# arptables -A IN -d 172.25.38.100 -j DROP   # drop incoming ARP requests for the VIP
[root@server3 ~]# arptables -A OUT -s 172.25.38.100 -j mangle --mangle-ip-s 172.25.38.4   # rewrite outgoing ARP traffic to use the real address
[root@server3 ~]# arptables -L   # verify the rules have been added
Chain IN (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
DROP       anywhere             172.25.38.100        anywhere           anywhere           any    any        any        any       

Chain OUT (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
mangle     172.25.38.100        anywhere             anywhere           anywhere           any    any        any        any       --mangle-ip-s server3 

Chain FORWARD (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
[root@server3 ~]# /etc/init.d/arptables_jf save   # save the rules
Saving current rules to /etc/sysconfig/arptables:          [  OK  ]
[root@server3 ~]# /etc/init.d/arptables_jf start   # start the service
Flushing all current rules and user defined chains:        [  OK  ]
Clearing all current rules and user defined chains:        [  OK  ]
Applying arptables firewall rules:                         [  OK  ]

Test from the physical host:

[root@foundation84 Desktop]# arp -an|grep 100   # check the MAC address cached during the arbitrariness test above
? (192.168.1.100) at 38:a4:ed:17:5a:c9 [ether] on wlp0s20f0u2
? (172.25.254.100) at 52:54:00:42:fc:3a [ether] on br0
[root@foundation84 Desktop]# arp -d 172.25.254.100   # delete it
[root@foundation84 Desktop]# arp -an|grep 100   # the entry is now empty; the MAC will no longer flip, so the arbitrariness is resolved
? (192.168.1.100) at 38:a4:ed:17:5a:c9 [ether] on wlp0s20f0u2
? (172.25.254.100) at <incomplete> on br0
[root@foundation84 Desktop]# ping 172.25.254.100   # contact the virtual IP
[root@foundation84 Desktop]# curl 172.25.254.100   # round-robin now works
bbs.westos.org
[root@foundation84 Desktop]# curl 172.25.254.100
www.westos.org
[root@foundation84 Desktop]# curl 172.25.254.100
bbs.westos.org
[root@foundation84 Desktop]# curl 172.25.254.100
www.westos.org

To see the effect in a browser, add a hosts entry on the physical host:

[root@foundation38 Desktop]# vim /etc/hosts   # add the name resolution

Testing in the browser, each refresh is answered by the next server in the round robin.


The priority of iptables versus LVS (both hook into the INPUT path):

[root@server3 ~]# iptables -P INPUT DROP   # setting the INPUT policy to DROP cuts everything off, even the ssh session; iptables rules are evaluated before LVS
[root@server3 ~]# iptables -P INPUT ACCEPT   # setting the policy back to ACCEPT restores access
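A minimal sketch (not from the original experiment) of how a restrictive policy could be kept without breaking LVS-forwarded HTTP traffic or the management session: accept the needed traffic explicitly before changing the default policy.

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # keep existing sessions alive
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                      # ssh management access
iptables -A INPUT -p tcp --dport 80 -j ACCEPT                      # HTTP traffic scheduled by LVS
iptables -P INPUT DROP                                             # now the default policy can be restrictive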

Check the rules on server1:

[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.38.100:80 rr
  -> 172.25.38.3:80               Route   1      0          0         
  -> 172.25.38.4:80               Route   1      0          0       

Stop httpd on server2:

[root@server2 html]# /etc/init.d/httpd stop
Stopping httpd:                                            [  OK  ]

Testing from the physical host, the round robin breaks: server2's Apache is down, but LVS by itself performs no health checks:

[root@foundation38 Desktop]# curl 172.25.38.100
curl: (7) Failed connect to 172.25.38.100:80; Connection refused
[root@foundation38 Desktop]# curl 172.25.38.100
bbs.westos.org
[root@foundation38 Desktop]# curl 172.25.38.100
curl: (7) Failed connect to 172.25.38.100:80; Connection refused
[root@foundation38 Desktop]# curl 172.25.38.100
bbs.westos.org
[root@foundation38 Desktop]# curl 172.25.38.100
curl: (7) Failed connect to 172.25.38.100:80; Connection refused

Stop httpd on server3:

[root@server3 html]# /etc/init.d/httpd stop
Stopping httpd:                                            [  OK  ]

Now every request from the physical host fails; server1 acts only as the director and does not serve content itself:

[root@foundation84 Desktop]# curl 172.25.254.100
curl: (7) Failed connect to 172.25.254.100:80; Connection refused
[root@foundation84 Desktop]# curl 172.25.254.100
curl: (7) Failed connect to 172.25.254.100:80; Connection refused
[root@foundation84 Desktop]# curl 172.25.254.100
curl: (7) Failed connect to 172.25.254.100:80; Connection refused
[root@foundation84 Desktop]# curl 172.25.254.100
curl: (7) Failed connect to 172.25.254.100:80; Connection refused
[root@foundation84 Desktop]# curl 172.25.254.100
curl: (7) Failed connect to 172.25.254.100:80; Connection refused

Install ldirectord, which works with LVS to provide health checking:

Install the package on server1:

[root@server1 ~]# ls
ldirectord-3.9.5-3.1.x86_64.rpm
[root@server1 ~]# yum install ldirectord-3.9.5-3.1.x86_64.rpm -y   # install the package


[root@server1 ~]# rpm -ql ldirectord   # list the files shipped with the package
/etc/ha.d
/etc/ha.d/resource.d
/etc/ha.d/resource.d/ldirectord
/etc/init.d/ldirectord
/etc/logrotate.d/ldirectord
/usr/lib/ocf/resource.d/heartbeat/ldirectord
/usr/sbin/ldirectord
/usr/share/doc/ldirectord-3.9.5
/usr/share/doc/ldirectord-3.9.5/COPYING
/usr/share/doc/ldirectord-3.9.5/ldirectord.cf
/usr/share/man/man8/ldirectord.8.gz
[root@server1 ~]# cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf /etc/ha.d/
[root@server1 ~]# cd /etc/ha.d/
[root@server1 ha.d]# ls
ldirectord.cf  resource.d  shellfuncs
[root@server1 ha.d]# vim ldirectord.cf   # edit the configuration file

Configuration file contents (shown only as a screenshot in the original):
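Based on the behaviour demonstrated below (round-robin over 172.25.38.3 and 172.25.38.4 in DR mode, with a fallback to the director's own httpd on 127.0.0.1), the file presumably looks roughly like this reconstructed sketch:

# /etc/ha.d/ldirectord.cf (reconstruction, not the original screenshot)
checktimeout=3
checkinterval=1
autoreload=yes
quiescent=no

virtual=172.25.38.100:80
        real=172.25.38.3:80 gate
        real=172.25.38.4:80 gate
        fallback=127.0.0.1:80 gate
        service=http
        scheduler=rr
        protocol=tcp
        checktype=connect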

[root@server1 ha.d]# ipvsadm -C   # clear all existing rules
[root@server1 ha.d]# /etc/init.d/ldirectord start   # start the service
[root@server1 ha.d]# ipvsadm -l  
[root@server1 ha.d]# ipvsadm -ln   # check the rules it generated

Stop Apache on both back-end servers:

[root@server3 html]# /etc/init.d/httpd stop
Stopping httpd:                                            [  OK  ]

Check the rules on server1:

[root@server1 ha.d]# ipvsadm -ln   # list the rules; only the local fallback is left
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.38.100:80 rr
  -> 127.0.0.1:80                 Local   1      0          0         
[root@server1 ha.d]# cd /var/www/html/
[root@server1 html]# ls
bansys  class_socket.php  config.php  index.php  purge_action.php  static
[root@server1 html]# rm -fr *
[root@server1 html]# ls
[root@server1 html]# vim index.html   # write a default page for Apache
server1 ----> under maintenance
[root@server1 html]# /etc/init.d/httpd start   # start Apache
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using 172.25.38.2 for ServerName
						  [  OK  ]


[root@server1 html]# netstat -antlp   # check listening ports: httpd is not on port 80, it is still on 8080
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      909/sshd            
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      985/master          
tcp        0      0 172.25.38.2:22              172.25.38.250:42018         ESTABLISHED 1034/sshd           
tcp        0      0 :::8080                     :::*                        LISTEN      2189/httpd          
tcp        0      0 :::22                       :::*                        LISTEN      909/sshd            
tcp        0      0 ::1:25                      :::*                        LISTEN      985/master          
[root@server1 html]# vim /etc/httpd/conf/httpd.conf   # edit the configuration file

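The edit itself is only shown as a screenshot; given that netstat above shows httpd on 8080 (left over from the earlier varnish setup) and the curl tests on port 80 succeed after the restart, it presumably just changes the Listen directive back:

# /etc/httpd/conf/httpd.conf (assumed change)
Listen 80        # previously Listen 8080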

[root@server1 html]# /etc/init.d/httpd restart   # restart Apache
Stopping httpd:                                            [  OK  ]
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using 172.25.38.2 for ServerName
                                                           [  OK  ]

Testing from the physical host, health checking is now in place:

[root@foundation38 Desktop]# curl 172.25.38.100   # the check no longer returns an error
server1 -----> this site is under maintenance
[root@foundation38 Desktop]# curl 172.25.38.100
server1 -----> this site is under maintenance

High-availability cluster plus load-balancing cluster:

keepalived is installed to prevent a failure of the director from bringing down the whole system.

keepalived is a service used in cluster management to guarantee high availability and avoid single points of failure. Failover is implemented with the VRRP protocol: the master node broadcasts heartbeat packets at a fixed interval to advertise that it is alive; when the master fails, the backup nodes stop receiving these broadcasts within a certain time, conclude that the master is down, and run their takeover procedure to claim the master's IP resources and services. When the master recovers, the backup node releases the resources again and returns to its pre-takeover state, completing the master/backup failover.

1. Create a fresh child-disk virtual machine (server4) for this experiment:

[root@server1 html]# /etc/init.d/ldirectord stop   # stop the service
Stopping ldirectord... success
[root@server1 html]# chkconfig ldirectord off   # disable it at boot as well
[root@server1 html]# cd
[root@server1 ~]# ls
keepalived-2.0.6.tar.gz  ldirectord-3.9.5-3.1.x86_64.rpm
[root@server1 ~]# tar zxf keepalived-2.0.6.tar.gz   # unpack the tarball
[root@server1 ~]# ls
keepalived-2.0.6  keepalived-2.0.6.tar.gz  ldirectord-3.9.5-3.1.x86_64.rpm
[root@server1 ~]# cd keepalived-2.0.6
[root@server1 keepalived-2.0.6]# yum install -y openssl-devel   # install a build dependency


[root@server1 keepalived-2.0.6]# ./configure --prefix=/usr/local/keepalived --with-init=SYSV
[root@server1 keepalived-2.0.6]# make
[root@server1 keepalived-2.0.6]# make install
Create four symbolic links:
[root@server1 init.d]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d
[root@server1 etc]# ln -s /usr/local/keepalived/etc/keepalived/ /etc
[root@server1 etc]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server1 etc]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
[root@server1 etc]# which keepalived
/sbin/keepalived
[root@server1 etc]# ll /etc/init.d/keepalived
lrwxrwxrwx 1 root root 48 Jul 30 09:47 /etc/init.d/keepalived -> /usr/local/keepalived/etc/rc.d/init.d/keepalived
[root@server1 etc]# chmod +x /usr/local/keepalived/etc/rc.d/init.d/keepalived   # make the init script executable
[root@server1 etc]# /etc/init.d/keepalived start   # the service now starts and stops normally
Starting keepalived:                                       [  OK  ]
[root@server1 etc]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [  OK  ]
[root@server1 etc]# 

Install scp on server4:

[root@server4 ~]# yum whatprovides */scp   # find which package provides scp
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
rhel-source                                              | 3.9 kB     00:00     
rhel-source/filelists_db                                 | 3.8 MB     00:00     
openssh-clients-5.3p1-94.el6.x86_64 : An open source SSH client applications
Repo        : rhel-source
Matched from:
Filename    : /usr/bin/scp



[root@server4 ~]# yum install -y openssh-clients-5.3p1-94.el6.x86_64   # install it

Copy the keepalived tree from server1 to server4:

[root@server1 etc]# cd /usr/local/
[root@server1 local]# ls
bin  etc  games  include  keepalived  lib  lib64  libexec  sbin  share  src
[root@server1 local]# scp -r keepalived/ server4:/usr/local/
The authenticity of host 'server4 (172.25.254.4)' can't be established.
RSA key fingerprint is 72:d4:25:cc:f0:a5:32:80:82:ce:d6:ae:09:28:45:2b.
Are you sure you want to continue connecting (yes/no)? yes

Create the symbolic links on server4 and start the service:

[root@server4 ~]# cd /usr/local/
[root@server4 local]# ls
bin  etc  games  include  keepalived  lib  lib64  libexec  sbin  share  src
[root@server4 local]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d   # create the symbolic links
[root@server4 local]# ln -s /usr/local/keepalived/etc/keepalived/ /etc
[root@server4 local]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server4 local]#  ln -s /usr/local/keepalived/sbin/keepalived /sbin/
[root@server4 local]# ll /etc/init.d/keepalived
lrwxrwxrwx 1 root root 48 Jul 30 10:12 /etc/init.d/keepalived -> /usr/local/keepalived/etc/rc.d/init.d/keepalived
[root@server4 local]# chmod +x /usr/local/keepalived/etc/rc.d/init.d/keepalived   # make the init script executable
[root@server4 local]# /etc/init.d/keepalived start   # the service starts and stops normally
Starting keepalived:                                       [  OK  ]
[root@server4 local]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [  OK  ]

2. Configure the highly available load-balancing cluster (httpd must be running on server2 and server3):

Configure keepalived on server1 and copy the finished file to server4:

[root@server1 etc]# cd keepalived/
[root@server1 keepalived]# ls
keepalived.conf  samples
[root@server1 keepalived]# vim keepalived.conf 
[root@server1 keepalived]# pwd
/etc/keepalived
[root@server1 keepalived]# scp keepalived.conf server4:/etc/keepalived
root@server4's password: 
keepalived.conf                               100%  862     0.8KB/s   00:00    
[root@server1 keepalived]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]

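The keepalived.conf itself appears only as screenshots in the original. A minimal sketch of a MASTER configuration for this DR topology (the VIP matches the address curl-tested below; the real-server addresses and all other values are placeholders to adjust to your own setup):

# /etc/keepalived/keepalived.conf -- illustrative MASTER config, not the original screenshot
global_defs {
   notification_email {
        root@localhost              # mail target, read later with mailx
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_MASTER
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100                    # the backup node uses a lower value
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100              # the VIP tested with curl below
    }
}

virtual_server 172.25.254.100 80 {
    delay_loop 3                    # health-check interval (the "3 seconds" mentioned below)
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 172.25.254.2 80 {   # adjust to your real-server addresses
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 172.25.254.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}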
Install a mail client on server1 to make the notification mails easy to check:

[root@server1 keepalived]# yum install mailx

Configure server4 as the backup node: change MASTER to BACKUP and lower the priority:

[root@server4 local]# cd /etc/keepalived/
[root@server4 keepalived]# ls
keepalived.conf  samples
[root@server4 keepalived]# vim keepalived.conf 
[root@server4 keepalived]# /etc/init.d/keepalived restart
Stopping keepalived:                                       [  OK  ]
Starting keepalived:                                       [  OK  ]

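The backup configuration is again only a screenshot; relative to the master sketch above, the change on server4 would amount to roughly:

vrrp_instance VI_1 {
    state BACKUP        # MASTER on server1
    priority 50         # any value lower than the master's priority
    # everything else stays identical to server1's file
}
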
Install a mail client on server4 as well:

[root@server4 keepalived]# yum install mailx

On server1, stop the service and then crash the kernel so the node cannot work at all:

[root@server1 keepalived]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [FAILED]
[root@server1 keepalived]# /etc/init.d/keepalived status
keepalived is stopped
[root@server1 keepalived]# echo c >/proc/sysrq-trigger 

Testing from the physical host still works (because of the high-availability mechanism, server4 takes over once server1 dies); the httpd service on the real servers must be running:

[root@foundation84 images]# curl 172.25.254.100
bbs.westos.org
[root@foundation84 images]# curl 172.25.254.100
www.westos.org
[root@foundation84 images]# curl 172.25.254.100
bbs.westos.org
[root@foundation84 images]# curl 172.25.254.100
www.westos.org

On server4, check that it has taken over server1's IP, achieving high availability:

[root@server4 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:ef:7f:b9 brd ff:ff:ff:ff:ff:ff
    inet 172.25.254.4/24 brd 172.25.254.255 scope global eth0
    inet 172.25.254.100/32 scope global eth0
    inet6 fe80::5054:ff:feef:7fb9/64 scope link 
       valid_lft forever preferred_lft forever
You have new mail in /var/spool/mail/root

Stop httpd on server2:

[root@server2 html]# /etc/init.d/httpd stop
Stopping httpd:                                            [  OK  ]

Testing from the physical host, requests are no longer distributed to server2:

[root@foundation84 images]# curl 172.25.254.100
bbs.westos.org
[root@foundation84 images]# curl 172.25.254.100   # refreshing before the 3-second check interval has passed still hits the dead server
curl: (7) Failed connect to 172.25.254.100:80; Connection refused
[root@foundation84 images]# curl 172.25.254.100
bbs.westos.org

Stop Apache on server3:

[root@server3 html]# /etc/init.d/httpd stop
Stopping httpd:                                            [  OK  ]

Testing from the physical host, there is no fallback to the director's (server1's) default page:

[root@foundation84 images]# curl 172.25.254.100
curl: (7) Failed connect to 172.25.254.100:80; Connection refused
[root@foundation84 images]# curl 172.25.254.100
curl: (7) Failed connect to 172.25.254.100:80; Connection refused
[root@foundation84 images]# curl 172.25.254.100
curl: (7) Failed connect to 172.25.254.100:80; Connection refused
[root@foundation84 images]# curl 172.25.254.100
curl: (7) Failed connect to 172.25.254.100:80; Connection refused

