I. LVS DR mode
- router -> F5 -> LVS (layer 4) -> nginx (layer 7)/HAProxy -> web servers
Host configuration: server1 (VS: Virtual Server)
1. Install ipvsadm
Note: on RHEL 6.5 the yum repositories must be configured first:
[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.90.250/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[HighAvailability]
name=HighAvailability
baseurl=http://172.25.90.250/rhel6.5/HighAvailability
gpgcheck=0
[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.90.250/rhel6.5/LoadBalancer
gpgcheck=0
[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.90.250/rhel6.5/ResilientStorage
gpgcheck=0
[ScalableFileSystem]
name=ScalableFileSystem
baseurl=http://172.25.90.250/rhel6.5/ScalableFileSystem
gpgcheck=0
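With those repositories in place, the install in step 1 is a single command (ipvsadm ships in the LoadBalancer channel configured above):

```shell
# Install the LVS userspace administration tool
yum install -y ipvsadm
```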
2. Configure the virtual IP (VIP)
[root@server1 ~]# ipvsadm -C
[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@server1 ~]# ip addr add 172.25.90.100/32 dev eth0
[root@server1 ~]# ipvsadm -A -t 172.25.90.100:80 -s rr
[root@server1 ~]# ipvsadm -a -t 172.25.90.100:80 -r 172.25.90.2:80 -g
[root@server1 ~]# ipvsadm -a -t 172.25.90.100:80 -r 172.25.90.3:80 -g
[root@server1 ~]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.90.100:http rr
-> server2:http Route 1 0 0
-> server3:http Route 1 0 0
[root@server1 ~]# /etc/init.d/ipvsadm save
ipvsadm: Saving IPVS table to /etc/sysconfig/ipvsadm: [ OK ]
server2 host configuration (RS: Real Server)
1. Install httpd and arptables_jf
2. Create the default httpd page /var/www/html/index.html
[root@server2 ~]# curl localhost
<h1>westos.org<h1>
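Steps 1 and 2 above can be sketched as commands (the page body matches the curl output shown; paths are the httpd defaults):

```shell
# Install the web server and the ARP firewall tool
yum install -y httpd arptables_jf
# Write the default page for this real server and start httpd
echo '<h1>westos.org<h1>' > /var/www/html/index.html
/etc/init.d/httpd start
```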
3. Configure the virtual IP
[root@server2 ~]# /etc/init.d/arptables_jf start
[root@server2 ~]# ip addr add 172.25.90.100/32 dev eth0
[root@server2 ~]# arptables -A IN -d 172.25.90.100 -j DROP
[root@server2 ~]# arptables -A OUT -s 172.25.90.100 -j mangle --mangle-ip-s 172.25.90.2
[root@server2 ~]# /etc/init.d/arptables_jf save
Saving current rules to /etc/sysconfig/arptables: [ OK ]
server3 host configuration (RS: Real Server)
1. Install httpd and arptables_jf
2. Create the default httpd page /var/www/html/index.html
[root@server3 ~]# curl localhost
<h1>www.westos.org<h1>
3. Configure the virtual IP
[root@server3 ~]# ip addr add 172.25.90.100/32 dev eth0
[root@server3 ~]# /etc/init.d/arptables_jf start
Flushing all current rules and user defined chains: [ OK ]
Clearing all current rules and user defined chains: [ OK ]
Applying arptables firewall rules: [ OK ]
[root@server3 ~]# arptables -A IN -d 172.25.90.100 -j DROP
[root@server3 ~]# arptables -A OUT -s 172.25.90.100 -j mangle --mangle-ip-s 172.25.90.3
[root@server3 ~]# /etc/init.d/arptables_jf save
Saving current rules to /etc/sysconfig/arptables: [ OK ]
Physical host
Access 172.25.90.100 to see the load balancing:
[kiosk@foundation90 Desktop]$ curl 172.25.90.100
<h1>www.westos.org<h1>
[kiosk@foundation90 Desktop]$ curl 172.25.90.100
<h1>westos.org<h1>
[kiosk@foundation90 Desktop]$ curl 172.25.90.100
<h1>www.westos.org<h1>
[kiosk@foundation90 Desktop]$ curl 172.25.90.100
<h1>westos.org<h1>
Check the MAC address of 172.25.90.100:
[kiosk@foundation90 Desktop]$ arp -an | grep 100
? (172.25.90.100) at 52:54:00:8d:fe:bc [ether] on br0
If this MAC address matches server1's, the service is working and load balancing is in place: client -> VS -> RS -> client.
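The rr scheduler used above simply cycles through the real servers in order, which is exactly the alternation the curl output shows. A hypothetical bash sketch of that rotation (illustration only, not the kernel code):

```shell
# Round-robin selection over the two real servers
servers=(172.25.90.2 172.25.90.3)
i=0
pick() {
  # print the next real server, then advance the counter
  echo "${servers[i % ${#servers[@]}]}"
  i=$((i + 1))
}
pick   # -> 172.25.90.2
pick   # -> 172.25.90.3
pick   # -> 172.25.90.2
```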
II. LVS health check (ldirectord)
1. Install the management package: ldirectord-3.9.5-3.1.x86_64.rpm
2. Edit the configuration file; locate it with rpm -ql ldirectord (lists the package's files, including the sample config)
[root@server1 ~]# cd /etc/ha.d/
[root@server1 ha.d]# ls
ldirectord.cf resource.d shellfuncs
[root@server1 ha.d]# vim ldirectord.cf
# Sample for an http virtual service
virtual=172.25.90.100:80
real=172.25.90.2:80 gate
real=172.25.90.3:80 gate
fallback=127.0.0.1:80 gate
service=http
scheduler=rr
#persistent=600
#netmask=255.255.255.255
protocol=tcp
checktype=negotiate
checkport=80
request="index.html"
# receive="Test Page"
# virtualhost=www.x.y.z
3. Clear the ipvsadm rules and start the ldirectord service
[root@server1 ha.d]# ipvsadm -C
[root@server1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@server1 ha.d]# /etc/init.d/ldirectord start
Starting ldirectord... success
[root@server1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.90.100:80 rr
-> 172.25.90.2:80 Route 1 0 0
-> 172.25.90.3:80 Route 1 0 0
4. When a Real Server shuts down its httpd service:
[root@server1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.90.100:80 rr
-> 127.0.0.1:80 Local 1 0 0
[root@server1 ha.d]# curl localhost
This site is under maintenance....
Now the physical host accesses 172.25.90.100:
[kiosk@foundation90 Desktop]$ curl 172.25.90.100
This site is under maintenance....
Note: once the php module is installed, httpd serves index.php in preference to index.html by default.
III. High availability: two hosts as the Virtual Server
1. Stop the ldirectord service on server1
[root@server1 ha.d]# /etc/init.d/ldirectord stop
Stopping ldirectord... success
[root@server1 ha.d]# chkconfig ldirectord off
2. Install openssh-clients (provides scp) on server1 and server4:
yum install openssh-clients.x86_64 -y
Configure the yum repositories on server4 and install ipvsadm.
3. Build keepalived from source on server1
Note: if the build fails, resolve the dependency by installing openssl-devel.
[root@server1 ~]# tar zxf keepalived-1.4.3.tar.gz
[root@server1 ~]# cd keepalived-1.4.3
[root@server1 keepalived-1.4.3]# ./configure --prefix=/usr/local/keepalived --with-init=SYSV
[root@server1 keepalived-1.4.3]# make && make install
4. Configure the keepalived service
[root@server1 init.d]# cd /usr/local/keepalived/etc/rc.d/init.d/
[root@server1 init.d]# ll
total 4
-rwxr-xr-x 1 root root 1308 Jun 20 14:35 keepalived
[root@server1 init.d]# ln -s /usr/local/keepalived/etc/keepalived/ /etc
[root@server1 init.d]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server1 init.d]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
[root@server1 init.d]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
[root@server1 init.d]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
vrrp_skip_check_adv_addr
# vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.90.100
}
}
virtual_server 172.25.90.100 80 {
delay_loop 1
lb_algo rr
lb_kind DR
# persistence_timeout 50
protocol TCP
real_server 172.25.90.2 80 {
weight 1
TCP_CHECK {
connect_timeout 3
retry 3
delay_before_retry 3
}
}
real_server 172.25.90.3 80 {
weight 1
TCP_CHECK {
connect_timeout 3
retry 3
delay_before_retry 3
}
}
}
5. Configure keepalived on server4
On server1: scp -r /etc/keepalived/keepalived.conf server4:/etc/keepalived/keepalived.conf
On server4, edit the config: vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
state BACKUP ## hot standby
interface eth0
virtual_router_id 51
priority 50 ## lower priority than the MASTER's 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.90.100
}
}
6. Enable high availability with health checking
On server1 and server4:
install the mailx package
start ipvsadm
reload keepalived
On server2 and server3:
httpd service running
default page in place
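The checklist in step 6 might translate into the following commands (RHEL 6 SysV service names assumed):

```shell
# server1 and server4: mail for notifications, IPVS rules, keepalived
yum install -y mailx
/etc/init.d/ipvsadm start
/etc/init.d/keepalived reload

# server2 and server3: confirm httpd is up and serving the default page
/etc/init.d/httpd start
curl -s localhost
```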
7. Failover tests from the physical host
To trigger a failover you can delete the VIP, stop the keepalived service, stop the network service, or crash the kernel.
Note: if the VIP is deleted by hand, keepalived does not detect it, so the service stays broken.
Result: when server1 fails, server4 immediately takes over its work; once server1 is repaired, it takes the work back from server4 because it has the higher priority.
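The failure modes listed in step 7 can be triggered on the active director with commands like these (a sketch; the last one assumes sysrq is enabled):

```shell
ip addr del 172.25.90.100/32 dev eth0   # remove the VIP by hand (keepalived will not re-add it)
/etc/init.d/keepalived stop             # clean failover to the BACKUP node
/etc/init.d/network stop                # simulate a link failure
echo c > /proc/sysrq-trigger            # crash the kernel
```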
[kiosk@foundation90 Desktop]$ curl 172.25.90.100
<h1>www.westos.org<h1>
[kiosk@foundation90 Desktop]$ curl 172.25.90.100
<h1>westos.org<h1>
[kiosk@foundation90 Desktop]$ arp -an | grep 100
? (172.25.90.100) at 52:54:00:8d:fe:bc [ether] on br0
? (172.25.254.100) at 52:54:00:8d:fe:bc [ether] on br0
[kiosk@foundation90 Desktop]$ curl 172.25.90.100
<h1>www.westos.org<h1>
[kiosk@foundation90 Desktop]$ curl 172.25.90.100
<h1>westos.org<h1>
[kiosk@foundation90 Desktop]$ arp -an | grep 100
? (172.25.90.100) at 52:54:00:f5:93:ac [ether] on br0
? (172.25.254.100) at 52:54:00:8d:fe:bc [ether] on br0
IV. LVS NAT mode
1. server1 (VS) configuration:
Add a second NIC (a single NIC carrying the VIP for external traffic also works, but is not recommended: it leads to delays and timeouts).
Clear the old rules:
[root@server1 ~]# ipvsadm -C
[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
Configure the network:
[root@server1 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.25.90.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
172.25.254.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
0.0.0.0 172.25.90.1 0.0.0.0 UG 0 0 0 eth0
Enable IP forwarding:
vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
[root@server1 ~]# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
Add the ipvsadm rules:
[root@server1 ~]# ipvsadm -A -t 172.25.254.100:80 -s rr
[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.90.2:80 -m
[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.90.3:80 -m
[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.100:80 rr
-> 172.25.90.2:80 Masq 1 0 0
-> 172.25.90.3:80 Masq 1 0 0
[root@server1 ~]# modprobe iptable_nat # without this module, subsequent requests may be very slow or time out
server2 and server3 (RS) only need their default gateway set to server1's internal IP; nothing else changes.
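On server2 and server3 that gateway change might look like this (172.25.90.1 is assumed to be server1's internal address; adjust to your topology):

```shell
# Route all RS replies back through the director so it can rewrite them
route del default
route add default gw 172.25.90.1
```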
Test from the physical host:
[kiosk@foundation90 Desktop]$ curl 172.25.254.100
<h1>www.westos.org<h1>
[kiosk@foundation90 Desktop]$ curl 172.25.254.100
<h1>westos.org<h1>
[kiosk@foundation90 Desktop]$ curl 172.25.254.100
<h1>www.westos.org<h1>
[kiosk@foundation90 Desktop]$ curl 172.25.254.100
<h1>westos.org<h1>
V. LVS TUN mode (tunnel mode)
Configure server1 (VS)
Set the rules:
[root@server1 ~]# ipvsadm -C
[root@server1 ~]# ip addr add 172.25.90.100/32 dev eth0
[root@server1 ~]# ipvsadm -A -t 172.25.90.100:80 -s rr
[root@server1 ~]# ipvsadm -a -t 172.25.90.100:80 -r 172.25.90.2:80 -i
[root@server1 ~]# ipvsadm -a -t 172.25.90.100:80 -r 172.25.90.3:80 -i
Disable rp_filter in the kernel and enable IP forwarding:
[root@server1 ~]# vim /etc/sysctl.conf
[root@server1 ~]# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
server2 (RS)
Install arptables_jf.
Use arptables to DROP all ARP requests for the VIP and rewrite outgoing ARP replies to the host's own IP:
[root@server2 ~]# arptables -A IN -d 172.25.90.100 -j DROP # drop incoming ARP requests for the VIP
[root@server2 ~]# arptables -A OUT -s 172.25.90.100 -j mangle --mangle-ip-s 172.25.90.2 # rewrite outgoing ARP source to the RS's own IP
[root@server2 ~]# arptables -L
Chain IN (policy ACCEPT)
target source-ip destination-ip source-hw destination-hw hlen op hrd pro
DROP anywhere 172.25.90.100 anywhere anywhere any any any any
Chain OUT (policy ACCEPT)
target source-ip destination-ip source-hw destination-hw hlen op hrd pro
mangle 172.25.90.100 anywhere anywhere anywhere any any any any --mangle-ip-s server2
Chain FORWARD (policy ACCEPT)
target source-ip destination-ip source-hw destination-hw hlen op hrd pro
[root@server2 ~]# /etc/init.d/arptables_jf save
Saving current rules to /etc/sysconfig/arptables: [ OK ]
Add the tunnel interface tunl0:
[root@server2 ~]# ifconfig tunl0 172.25.90.100 netmask 255.255.255.255 up
[root@server2 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 52:54:00:87:99:C2
inet addr:172.25.90.2 Bcast:172.25.90.255 Mask:255.255.255.0
inet6 addr: fe80::5054:ff:fe87:99c2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1386 errors:0 dropped:0 overruns:0 frame:0
TX packets:746 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:125982 (123.0 KiB) TX bytes:97255 (94.9 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
tunl0 Link encap:IPIP Tunnel HWaddr
inet addr:172.25.90.100 Mask:255.255.255.255
UP RUNNING NOARP MTU:1480 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
[root@server2 ~]# route add -host 172.25.90.100 dev tunl0 # host route so packets that arrive via the tunnel also leave via it
[root@server2 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.25.90.100 0.0.0.0 255.255.255.255 UH 0 0 0 tunl0
172.25.90.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
0.0.0.0 172.25.90.1 0.0.0.0 UG 0 0 0 eth0
Also disable rp_filter in the kernel on the real servers.
server3 is configured the same way as server2.
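Disabling rp_filter on a real server might look like this (interface names assumed); without it, the kernel's reverse-path check drops the de-encapsulated client packets arriving on tunl0:

```shell
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.default.rp_filter=0
sysctl -w net.ipv4.conf.tunl0.rp_filter=0
sysctl -w net.ipv4.conf.eth0.rp_filter=0
```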
Test from the physical host:
[kiosk@foundation90 Desktop]$ curl 172.25.90.100
<h1>www.westos.org<h1>
[kiosk@foundation90 Desktop]$ curl 172.25.90.100
<h1>westos.org<h1>
[kiosk@foundation90 Desktop]$ curl 172.25.90.100
<h1>www.westos.org<h1>
[kiosk@foundation90 Desktop]$ curl 172.25.90.100
<h1>westos.org<h1>