The role of high availability
High availability (HA) means that when some physical host in a resource pool fails, the virtual machines running on it are restarted on other healthy hosts in the pool, so the pool keeps running safely and reliably. It is a common feature of server virtualization software.
Setup steps
This experiment uses four virtual machines: server1, server2, server3, and server4. server1 acts as the master director and server4 as the backup director; when the master director fails, the backup takes over its work.
-
server1: configure the yum repositories

vim /etc/yum.repos.d/rhel-source.repo

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.61.250/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.61.250/rhel6.5/LoadBalancer
gpgcheck=0

[HighAvailability]
name=HighAvailability
baseurl=http://172.25.61.250/rhel6.5/HighAvailability
gpgcheck=0
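As a quick sanity check that the file was saved correctly, the repository IDs it defines can be listed with a small helper (this helper is illustrative and not part of the original steps):

```shell
# List the repo IDs defined in a yum .repo file, e.g. to confirm the
# LoadBalancer and HighAvailability sections are present.
list_repo_ids() {
    sed -n 's/^\[\(.*\)\]$/\1/p' "$1"
}
# Usage: list_repo_ids /etc/yum.repos.d/rhel-source.repo
```

After this, `yum repolist` should show all three repositories.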
-
Install the keepalived-2.0.6 source package on the master director.
Unpack it: tar zxf keepalived-2.0.6.tar.gz
-
Install the packages needed for compilation:
yum install -y gcc libnl libnl-devel
yum install -y openssl-devel
yum install -y libnfnetlink-devel-1.0.0-1.el6.x86_64.rpm
-
Enter the unpacked directory, then compile and install:
cd keepalived-2.0.6
./configure --prefix=/usr/local/keepalived --with-init=SYSV
make && make install
When the configure output shows "Use IPVS Framework : Yes", the build has IPVS support and the installation is complete.
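That final check can also be scripted by saving the configure output and grepping it; the exact wording of the line may vary slightly between keepalived versions, so treat the pattern as an assumption:

```shell
# Save the configure output to a log, e.g.:
#   ./configure --prefix=/usr/local/keepalived --with-init=SYSV 2>&1 | tee configure.log
# then check that IPVS support was detected before building.
ipvs_enabled() {
    grep -qi 'IPVS Framework.*Yes' "$1"
}
# Usage: ipvs_enabled configure.log && make && make install
```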
Copy the compiled keepalived tree from the master director to the backup director:
scp -r /usr/local/keepalived/ server4:/usr/local
chmod +x /usr/local/keepalived/etc/rc.d/init.d/keepalived   ## make the init script executable
ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/   ## create symlinks
ln -s /usr/local/keepalived/etc/keepalived/ /etc/
ln -s /usr/local/keepalived/sbin/keepalived /sbin/
Repeat the same chmod and symlink steps on the backup director.
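Since the same four symlinks are created on both directors, they can be wrapped in a small helper so the two machines stay consistent (a sketch assuming the /usr/local/keepalived prefix used above):

```shell
# Create the init-script, sysconfig, config-dir and binary symlinks.
# $1 = install prefix, $2 = target root ("" for the real filesystem).
link_keepalived() {
    prefix=$1; root=$2
    ln -sf "$prefix/etc/rc.d/init.d/keepalived" "$root/etc/init.d/keepalived"
    ln -sf "$prefix/etc/sysconfig/keepalived"   "$root/etc/sysconfig/keepalived"
    ln -sf "$prefix/etc/keepalived"             "$root/etc/keepalived"
    ln -sf "$prefix/sbin/keepalived"            "$root/sbin/keepalived"
}
# Usage on each director: link_keepalived /usr/local/keepalived ""
```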
- Install the LVS administration tool on the directors
yum install ipvsadm -y
Write the keepalived main configuration file on the master director:
vim /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
       root@localhost
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.61.100
    }
}

virtual_server 172.25.61.100 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.61.2 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }

    real_server 172.25.61.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}
In the backup director's configuration file, change state MASTER to BACKUP and lower priority from 100 to 50.
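Instead of editing by hand, the backup configuration can be derived from the master one, assuming (as described above) that only the state and priority lines differ:

```shell
# Rewrite a MASTER keepalived.conf as the BACKUP variant.
to_backup() {
    sed -e 's/state MASTER/state BACKUP/' \
        -e 's/priority 100/priority 50/' "$1"
}
# Usage on server1, then copy the result over:
#   to_backup /etc/keepalived/keepalived.conf > /tmp/keepalived.conf
#   scp /tmp/keepalived.conf server4:/etc/keepalived/keepalived.conf
```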
-
Start the apache service on server2 and server3 (the real servers).
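For the round-robin test below to work in DR mode, server2 and server3 also need the VIP bound locally with ARP for it suppressed. That step is not spelled out in these notes (it was presumably done in an earlier LVS experiment), so the following is an assumed sketch using RHEL 6 arptables syntax; the helper only prints the commands so they can be reviewed before running as root:

```shell
# Print the DR-mode real-server setup commands for a given VIP and real IP.
# (Assumption: this mirrors the earlier plain-LVS setup of this lab series.)
dr_realserver_cmds() {
    vip=$1; rip=$2
    printf 'ip addr add %s/32 dev lo\n' "$vip"
    printf 'arptables -A IN -d %s -j DROP\n' "$vip"
    printf 'arptables -A OUT -s %s -j mangle --mangle-ip-s %s\n' "$vip" "$rip"
}
# On server2: dr_realserver_cmds 172.25.61.100 172.25.61.2 | sh
# On server3: dr_realserver_cmds 172.25.61.100 172.25.61.3 | sh
```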
-
Start the service on server1 and server4 (the master and backup directors):
/etc/init.d/keepalived start
-
Testing
Test from the physical host:
server2 and server3 are scheduled in round-robin fashion:
[root@foundation61 ~]# curl 172.25.61.100
<h1>LVS server3</h1>
[root@foundation61 ~]# curl 172.25.61.100
<h1>LVS server2</h1>
[root@foundation61 ~]# curl 172.25.61.100
<h1>LVS server3</h1>
[root@foundation61 ~]# curl 172.25.61.100
<h1>LVS server2</h1>
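The round-robin behaviour can also be checked automatically by counting how often each backend answers over a batch of requests (a sketch; the VIP and page contents are the ones shown above):

```shell
# Tally identical response lines, most frequent first.
count_backends() {
    sort | uniq -c | sort -rn
}
# Usage on the client:
#   for i in $(seq 1 10); do curl -s 172.25.61.100; done | count_backends
# With rr scheduling, server2 and server3 should each appear about 5 times.
```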
Stop the keepalived service on the master director and test again: server2 and server3 are still scheduled in round-robin, but the VIP has floated to the backup director:
[root@foundation61 ~]# curl 172.25.61.100
<h1>LVS server3</h1>
[root@foundation61 ~]# curl 172.25.61.100
<h1>LVS server2</h1>
[root@foundation61 ~]# curl 172.25.61.100
<h1>LVS server3</h1>
[root@foundation61 ~]# curl 172.25.61.100
<h1>LVS server2</h1>
[root@server4 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:3f:af:58 brd ff:ff:ff:ff:ff:ff
inet 172.25.61.4/24 brd 172.25.61.255 scope global eth0
inet 172.25.61.100/32 scope global eth0
inet6 fe80::5054:ff:fe3f:af58/64 scope link
valid_lft forever preferred_lft forever