Nginx Load Balancing & High Availability Configuration
Environment:
The firewall and SELinux are disabled on all hosts.
| Hostname | IP Address | Services | OS |
|---|---|---|---|
| LB01 | 192.168.92.130 | keepalived, nginx | CentOS 8 |
| LB02 | 192.168.92.129 | keepalived, nginx | CentOS 8 |
| RS01 | 192.168.92.132 | nginx | CentOS 8 |
| RS02 | 192.168.92.133 | nginx | CentOS 8 |
Requirements:
LB01 acts as the primary load balancer and LB02 as the backup, with the VIP set to 192.168.92.200. RS01 and RS02 are the servers that actually handle business requests.
Deploying the RS hosts
RS01 configuration
# Install nginx
[root@RS01 ~]# yum -y install nginx
# Back up the original index page first, then write the new page content
[root@RS01 ~]# cd /usr/share/nginx/html/
[root@RS01 html]# ls
404.html 50x.html index.html nginx-logo.png poweredby.png
[root@RS01 html]# mv index.html{,.bak}
[root@RS01 html]# echo 'This is RS01.' > index.html
[root@RS01 html]# ls
404.html 50x.html index.html index.html.bak nginx-logo.png poweredby.png
# Start nginx and enable it at boot
[root@RS01 html]# systemctl enable --now nginx.service
RS02 configuration
[root@RS02 ~]# dnf -y install nginx
[root@RS02 ~]# cd /usr/share/nginx/html/
[root@RS02 html]# mv index.html{,.bak}
[root@RS02 html]# echo "This is RS02." > index.html
[root@RS02 html]# ls
404.html 50x.html index.html index.html.bak nginx-logo.png poweredby.png
[root@RS02 html]# systemctl enable --now nginx.service
Test that both RS hosts respond
[root@LB01 ~]# curl 192.168.92.132
This is RS01.
[root@LB01 ~]# curl 192.168.92.133
This is RS02.
Deploying the LB hosts
Configure load balancing on LB01
# Install nginx
[root@LB01 ~]# dnf -y install nginx
# Back up the original config before editing it, a good habit for any ops engineer
[root@LB01 ~]# cd /etc/nginx/
[root@LB01 nginx]# cp nginx.conf nginx.conf.bak
[root@LB01 nginx]# ls
conf.d fastcgi_params mime.types nginx.conf.default uwsgi_params.default
default.d fastcgi_params.default mime.types.default scgi_params win-utf
fastcgi.conf koi-utf nginx.conf scgi_params.default
fastcgi.conf.default koi-win nginx.conf.bak uwsgi_params
# Configure load balancing
[root@LB01 nginx]# vim nginx.conf
upstream webserver {            # pool of backend servers that handle the actual requests
    server 192.168.92.132;      # RS01's IP
    server 192.168.92.133;      # RS02's IP
}
server {
    listen       80;
    server_name  _;
    root         /usr/share/nginx/html;
    include /etc/nginx/default.d/*.conf;
    location / {
        proxy_pass http://webserver;
    }
}
[root@LB01 nginx]# systemctl enable --now nginx.service
Test the load balancing:
# With no weights assigned, requests alternate 1:1 round-robin by default
[root@LB01 nginx]# curl 192.168.92.130
This is RS01.
[root@LB01 nginx]# curl 192.168.92.130
This is RS02.
[root@LB01 nginx]# curl 192.168.92.130
This is RS01.
[root@LB01 nginx]# curl 192.168.92.130
This is RS02.
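The 1:1 split can be skewed by giving each backend a weight on its `server` line. A hypothetical sketch, not part of the setup above, sending RS01 two out of every three requests:

```nginx
upstream webserver {
    server 192.168.92.132 weight=2;   # RS01 gets 2 of every 3 requests
    server 192.168.92.133 weight=1;   # RS02 gets 1 of every 3
}
```

Weights are useful when the backends have unequal capacity; with equal hardware, the default of 1 per server is fine.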
Configure load balancing on LB02
# Install nginx
[root@LB02 ~]# dnf -y install nginx
[root@LB02 ~]# cd /etc/nginx/
[root@LB02 nginx]# cp nginx.conf nginx.conf.bak
[root@LB02 nginx]# vim nginx.conf
upstream webserver {
    server 192.168.92.132;
    server 192.168.92.133;
}
server {
    listen       80;
    server_name  _;
    root         /usr/share/nginx/html;
    include /etc/nginx/default.d/*.conf;
    location / {
        proxy_pass http://webserver;
    }
}
[root@LB02 nginx]# systemctl start nginx.service
Test the load balancing:
[root@LB02 nginx]# curl 192.168.92.129
This is RS01.
[root@LB02 nginx]# curl 192.168.92.129
This is RS02.
[root@LB02 nginx]# curl 192.168.92.129
This is RS01.
[root@LB02 nginx]# curl 192.168.92.129
This is RS02.
# Stop nginx once testing is done
[root@LB02 nginx]# systemctl stop nginx.service
Deploying HA
Configure LB01 as the primary LB
# Install the high-availability software
[root@LB01 ~]# dnf -y install keepalived
# Generate an 8-character password
[root@LB01 ~]# cd /etc/keepalived/
[root@LB01 keepalived]# strings /dev/urandom | tr -dc A-Za-z0-9 | head -c8; echo
pP5ek1YA
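A slightly shorter variant of the same idea (tr can read /dev/urandom directly, so the strings step isn't needed); the actual password will of course differ on every run:

```shell
#!/bin/bash
# Draw random bytes, keep only alphanumeric characters, truncate to 8.
pass=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c8)
echo "$pass"               # a random 8-character string, different each run
printf '%s\n' "${#pass}"   # always prints 8
```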
# Configure keepalived
[root@LB01 keepalived]# vim keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id lb01
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 81
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass pP5ek1YA
    }
    virtual_ipaddress {
        192.168.92.200
    }
}
virtual_server 192.168.92.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    real_server 192.168.92.130 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.92.129 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
# Start keepalived and enable it at boot
[root@LB01 ~]# systemctl enable --now keepalived.service
# The VIP is now present
[root@LB01 ~]# ip a s ens32
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:9e:e3:c1 brd ff:ff:ff:ff:ff:ff
inet 192.168.92.130/24 brd 192.168.92.255 scope global dynamic noprefixroute ens32
valid_lft 1707sec preferred_lft 1707sec
inet 192.168.92.200/32 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe9e:e3c1/64 scope link noprefixroute
valid_lft forever preferred_lft forever
# Access the site via the VIP. If it fails and you are sure the config is correct, the most likely cause is that the backup load balancer's nginx was never stopped
[root@LB01 ~]# curl 192.168.92.200
This is RS01.
[root@LB01 ~]# curl 192.168.92.200
This is RS02.
[root@LB01 ~]# curl 192.168.92.200
This is RS01.
[root@LB01 ~]# curl 192.168.92.200
This is RS02.
Verify that it really is LB01 (the MASTER) doing the reverse proxying
A brief note on how nginx reverse proxying works: the reverse proxy receives the client's request, re-issues it on the client's behalf to one of the backend node servers, and finally returns the data to the client. The backend nodes therefore never see the client at all, because every request they handle comes from the proxy server.
# Access the site via the VIP from LB02
[root@LB02 nginx]# curl 192.168.92.200
This is RS01.
[root@LB02 nginx]# curl 192.168.92.200
This is RS02.
[root@LB02 nginx]# curl 192.168.92.200
This is RS01.
[root@LB02 nginx]# curl 192.168.92.200
This is RS02.
# Check the access log on RS01
[root@RS01 html]# cd /var/log/nginx/
[root@RS01 nginx]# ls
access.log error.log
# The requests did indeed come from LB01's IP
[root@RS01 nginx]# tail -f access.log
192.168.92.130 - - [17/Oct/2022:20:41:21 +0800] "GET / HTTP/1.0" 200 14 "-" "curl/7.61.1" "-"
192.168.92.130 - - [17/Oct/2022:20:41:23 +0800] "GET / HTTP/1.0" 200 14 "-" "curl/7.61.1" "-"
Configure LB02 as the backup LB
[root@LB02 ~]# dnf -y install keepalived
[root@LB02 ~]# cd /etc/keepalived/
[root@LB02 keepalived]# mv keepalived.conf{,.bak}
# Copy the keepalived config straight over from LB01
[root@LB02 keepalived]# scp root@192.168.92.130:/etc/keepalived/keepalived.conf ./
[root@LB02 keepalived]# ls
keepalived.conf keepalived.conf.bak
# Edit the config. Only two settings need attention: state, which becomes BACKUP, and priority, which must be lower than the MASTER's. (router_id is left unchanged here, though giving each node a unique id is cleaner.)
[root@LB02 keepalived]# vim keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id lb01
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 81
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass pP5ek1YA
    }
    virtual_ipaddress {
        192.168.92.200
    }
}
virtual_server 192.168.92.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    real_server 192.168.92.130 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.92.129 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@LB02 keepalived]# systemctl enable --now keepalived.service
Testing MASTER/BACKUP failover
# Simulate a failure of the primary load balancer
[root@LB01 ~]# systemctl stop nginx keepalived.service
# On the backup load balancer, check for the VIP
[root@LB02 ~]# ip a s ens32
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:e2:b1:9f brd ff:ff:ff:ff:ff:ff
inet 192.168.92.129/24 brd 192.168.92.255 scope global dynamic noprefixroute ens32
valid_lft 1317sec preferred_lft 1317sec
inet 192.168.92.200/32 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fee2:b19f/64 scope link noprefixroute
valid_lft forever preferred_lft forever
# Start nginx so LB02 takes over load balancing
[root@LB02 ~]# systemctl start nginx.service
[root@LB02 ~]# curl 192.168.92.200
This is RS01.
[root@LB02 ~]# curl 192.168.92.200
This is RS02.
[root@LB02 ~]# curl 192.168.92.200
This is RS01.
[root@LB02 ~]# curl 192.168.92.200
This is RS02.
# In RS01's access log, the source IP is now LB02
[root@RS01 nginx]# tail -f access.log
192.168.92.129 - - [17/Oct/2022:21:10:31 +0800] "GET / HTTP/1.0" 200 14 "-" "curl/7.61.1" "-"
192.168.92.129 - - [17/Oct/2022:21:10:33 +0800] "GET / HTTP/1.0" 200 14 "-" "curl/7.61.1" "-"
# If you want to go on to the monitoring script for semi-automatic failover, first restore LB01 as the primary load balancer
[root@LB02 ~]# systemctl stop nginx.service
[root@LB01 ~]# systemctl start nginx.service keepalived.service
[root@LB01 ~]# ip a s ens32
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:9e:e3:c1 brd ff:ff:ff:ff:ff:ff
inet 192.168.92.130/24 brd 192.168.92.255 scope global dynamic noprefixroute ens32
valid_lft 1205sec preferred_lft 1205sec
inet 192.168.92.200/32 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe9e:e3c1/64 scope link noprefixroute
valid_lft forever preferred_lft forever
Configuring a monitoring script for semi-automatic failover
"Semi-automatic" failover means: when the MASTER keepalived dies, the monitoring script detects it and the BACKUP automatically becomes the new MASTER; but when the old MASTER recovers and should take over again, an administrator has to switch it back by hand.
LB01 configuration
[root@LB01 ~]# mkdir /scripts && cd /scripts
[root@LB01 scripts]# vim check_nginx.sh
#!/bin/bash
# If no nginx process is running, stop keepalived so the BACKUP takes over
nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep '\bnginx\b' | wc -l)
if [ "$nginx_status" -lt 1 ]; then
    systemctl stop keepalived
fi
[root@LB01 scripts]# vim notify.sh
#!/bin/bash
case "$1" in
master)
    # Promoted to MASTER: make sure nginx is running
    nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep '\bnginx\b' | wc -l)
    if [ "$nginx_status" -lt 1 ]; then
        systemctl start nginx
    fi
    ;;
backup)
    # Demoted to BACKUP: make sure nginx is stopped
    nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep '\bnginx\b' | wc -l)
    if [ "$nginx_status" -gt 0 ]; then
        systemctl stop nginx
    fi
    ;;
*)
    echo "Usage: $0 master|backup VIP"
    ;;
esac
[root@LB01 scripts]# chmod +x check_nginx.sh notify.sh
[root@LB01 scripts]# ll
total 8
-rwxr-xr-x 1 root root 139 Oct 17 23:09 check_nginx.sh
-rwxr-xr-x 1 root root 392 Oct 17 23:20 notify.sh
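Before wiring the scripts into keepalived, it's worth letting bash parse them without executing anything, since keepalived will silently ignore a broken check script. A self-contained sketch of the idea (in practice you'd run `bash -n` against /scripts/*.sh directly):

```shell
#!/bin/bash
# bash -n parses a script and reports syntax errors without running it.
# Sketch: write a throwaway script to a temp file, then syntax-check it.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
#!/bin/bash
if ! pgrep -x nginx > /dev/null; then
    echo "nginx is not running"
fi
EOF
if bash -n "$tmp"; then
    echo "syntax OK"
fi
rm -f "$tmp"
```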
# Wire the monitoring script into keepalived
[root@LB01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id lb01
}
# Add the following five lines
vrrp_script nginx_check {
    script "/scripts/check_nginx.sh"
    interval 1
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 81
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass pP5ek1YA
    }
    virtual_ipaddress {
        192.168.92.200
    }
    track_script {          # add these four lines
        nginx_check
    }
    notify_master "/scripts/notify.sh master"
}
virtual_server 192.168.92.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    real_server 192.168.92.130 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.92.129 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@LB01 ~]# systemctl restart keepalived.service
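Note that this config gives LB01 two demotion paths. check_nginx.sh stops keepalived outright, so the MASTER simply disappears; but even if the script merely returned non-zero, keepalived would subtract the script's negative weight from the instance priority, which with these numbers is already enough to lose the election. A sketch of that arithmetic, using the values from this article:

```shell
#!/bin/bash
# Effective-priority arithmetic keepalived applies when a tracked script
# with a negative weight fails (values taken from this article's config).
master_prio=100    # LB01's configured priority
backup_prio=90     # LB02's configured priority
script_weight=-20  # vrrp_script nginx_check weight

effective=$((master_prio + script_weight))
echo "LB01 effective priority: $effective"
if [ "$effective" -lt "$backup_prio" ]; then
    echo "LB02 wins the election and takes over the VIP"
fi
```

Running it prints an effective priority of 80, below LB02's 90, so LB02 takes over either way.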
LB02 configuration
The BACKUP has no need to monitor nginx itself; it just starts nginx when promoted to MASTER and stops it when demoted back to BACKUP.
[root@LB02 ~]# mkdir /scripts && cd /scripts
[root@LB02 scripts]# scp root@192.168.92.130:/scripts/notify.sh ./
[root@LB02 scripts]# cat notify.sh
#!/bin/bash
case "$1" in
master)
    # Promoted to MASTER: make sure nginx is running
    nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep '\bnginx\b' | wc -l)
    if [ "$nginx_status" -lt 1 ]; then
        systemctl start nginx
    fi
    ;;
backup)
    # Demoted to BACKUP: make sure nginx is stopped
    nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep '\bnginx\b' | wc -l)
    if [ "$nginx_status" -gt 0 ]; then
        systemctl stop nginx
    fi
    ;;
*)
    echo "Usage: $0 master|backup VIP"
    ;;
esac
[root@LB02 scripts]# ll
total 4
-rwxr-xr-x 1 root root 376 Oct 17 23:34 notify.sh
[root@LB02 scripts]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id lb01
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 81
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass pP5ek1YA
    }
    virtual_ipaddress {
        192.168.92.200
    }
    notify_master "/scripts/notify.sh master"   # add these two lines
    notify_backup "/scripts/notify.sh backup"
}
virtual_server 192.168.92.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    real_server 192.168.92.130 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.92.129 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@LB02 scripts]# systemctl restart keepalived.service
Test that the monitoring script performs the failover automatically
# The VIP is currently on LB01, so it is still the MASTER
[root@LB01 ~]# ip a s ens32
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:9e:e3:c1 brd ff:ff:ff:ff:ff:ff
inet 192.168.92.130/24 brd 192.168.92.255 scope global dynamic noprefixroute ens32
valid_lft 1534sec preferred_lft 1534sec
inet 192.168.92.200/32 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe9e:e3c1/64 scope link noprefixroute
valid_lft forever preferred_lft forever
# Manually stop nginx to simulate a load-balancer failure
[root@LB01 ~]# systemctl stop nginx.service
# Because nginx went down, the script stopped keepalived, and the VIP is gone
[root@LB01 scripts]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Mon 2022-10-17 23:42:38 CST; 10s ago
[root@LB01 scripts]# ip a s ens32
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:9e:e3:c1 brd ff:ff:ff:ff:ff:ff
inet 192.168.92.130/24 brd 192.168.92.255 scope global dynamic noprefixroute ens32
valid_lft 1326sec preferred_lft 1326sec
inet6 fe80::20c:29ff:fe9e:e3c1/64 scope link noprefixroute
valid_lft forever preferred_lft forever
# Over on LB02, the VIP has moved to this load balancer
[root@LB02 ~]# ip a s ens32
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:e2:b1:9f brd ff:ff:ff:ff:ff:ff
inet 192.168.92.129/24 brd 192.168.92.255 scope global dynamic noprefixroute ens32
valid_lft 1230sec preferred_lft 1230sec
inet 192.168.92.200/32 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fee2:b19f/64 scope link noprefixroute
valid_lft forever preferred_lft forever
# nginx's port 80 came up along with it
[root@LB02 ~]# ss -anlt | grep 80
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
# To make LB01 the MASTER again, manually start nginx and keepalived on it
This article covered configuring Nginx as a highly available load balancer: RS01 and RS02 were deployed as the servers handling the real traffic, LB01 was set up as the primary load balancer with LB02 as the backup, and the load-balancing tests, the HA deployment, and semi-automatic MASTER/BACKUP failover via a monitoring script were walked through in detail.