1. Prerequisite Environment Setup
1.1 Host Environment
IP Address | Hostname | OS
---|---|---
192.168.199.111 | master | CentOS 7
192.168.199.112 | node1 | CentOS 7
192.168.199.113 | node2 | CentOS 7
1.2 Software Versions
Software | Version | Download Link
---|---|---
Java | 1.8.0_161-b12 | |
nginx | nginx-1.9.9 | https://nginx.org/download/nginx-1.9.9.tar.gz
Flume | 1.9.0-cdh6.2.0 | https://github.com/cloudera/flume-ng/tree/cdh6.2.0
1.3 Deployment Architecture Diagram
2. Software Installation
2.1 Nginx Installation
1. Download the nginx source package:
wget https://nginx.org/download/nginx-1.9.9.tar.gz
2. Install the nginx build dependencies with yum:
yum -y install make zlib-devel gcc-c++ libtool openssl openssl-devel
3. Extract, compile, and install nginx:
tar -zxvf nginx-1.9.9.tar.gz
cd nginx-1.9.9
./configure --prefix=/usr/local/nginx
make && make install
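To confirm the build succeeded, the installed binary can report its version and configure arguments:
# Prints "nginx version: nginx/1.9.9" plus the configure arguments used
/usr/local/nginx/sbin/nginx -V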
4. Start and test
After installation, nginx is located under /usr/local/nginx (the binary is /usr/local/nginx/sbin/nginx).
Start nginx:
cd /usr/local/nginx/sbin
./nginx
nginx listens on port 80 by default. With nginx installed on node2 (192.168.199.113), open http://192.168.199.113:80/ to check the setup.
If the nginx welcome page is displayed, the configuration is complete.
If the page cannot be reached, open the port in the firewall or disable the firewall:
# List the firewall's currently open ports and services
firewall-cmd --list-all
# Open port 80 (either of the following works)
firewall-cmd --add-service=http --permanent
firewall-cmd --add-port=80/tcp --permanent
# Reload the firewall to apply the changes
firewall-cmd --reload
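Once the port is open, the page can also be checked from another host with curl (adjust the IP to your nginx server):
# Expect an HTTP 200 response with a "Server: nginx/1.9.9" header
curl -I http://192.168.199.113/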
Other useful commands:
# Stop nginx
./nginx -s stop
# Reload the nginx configuration
./nginx -s reload
5. Configure load balancing
Set up nginx load balancing on node1 and node2 so that incoming events are distributed across the flume agents.
Add the following configuration to nginx.conf:
http {
    upstream flumeserver {
        server 192.168.199.111:8080;
        server 192.168.199.112:8080;
        server 192.168.199.113:8080;
    }
    server {
        listen 80;    # accept incoming events on port 80
        location / {
            proxy_pass http://flumeserver;
            proxy_connect_timeout 10;
        }
    }
}
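After editing nginx.conf, validate the syntax and reload so the new upstream takes effect without dropping connections:
# Check the configuration file for syntax errors
/usr/local/nginx/sbin/nginx -t
# Reload the running nginx with the new configuration
/usr/local/nginx/sbin/nginx -s reload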
6. Nginx failover with keepalived (LVS)
Install keepalived:
yum install keepalived -y
Master server configuration:
After installing keepalived, edit its configuration file /etc/keepalived/keepalived.conf:
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_script chk_http_port {
    script "/usr/local/src/nginx_check.sh"
    interval 2    # interval (seconds) between health-check runs
    weight 2
}
vrrp_instance VI_1 {
    state MASTER              # set to BACKUP on the backup server
    interface ens33           # network interface to bind to
    virtual_router_id 51      # must be identical on master and backup
    priority 100              # master gets the higher value, backup the lower
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_http_port         # run the nginx health check defined above
    }
    virtual_ipaddress {
        192.168.200.50        # VRRP virtual IP (VIP)
    }
}
Configure the nginx health-check script:
vim /usr/local/src/nginx_check.sh
#!/bin/bash
# If nginx has died, try to restart it; if the restart fails,
# kill keepalived so the VIP fails over to the other node.
A=$(ps -C nginx --no-headers | wc -l)
if [ "$A" -eq 0 ]; then
    /usr/local/nginx/sbin/nginx
    sleep 2
    if [ "$(ps -C nginx --no-headers | wc -l)" -eq 0 ]; then
        killall keepalived
    fi
fi
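Make the script executable so keepalived can run it:
chmod +x /usr/local/src/nginx_check.sh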
Backup server configuration
After installing keepalived, edit its configuration file /etc/keepalived/keepalived.conf:
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_script chk_http_port {
    script "/usr/local/src/nginx_check.sh"
    interval 2    # interval (seconds) between health-check runs
    weight 2
}
vrrp_instance VI_1 {
    state BACKUP              # MASTER on the master server
    interface ens33           # network interface to bind to
    virtual_router_id 51      # must be identical on master and backup
    priority 90               # lower than the master's priority of 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_http_port         # run the nginx health check defined above
    }
    virtual_ipaddress {
        192.168.200.50        # VRRP virtual IP (VIP)
    }
}
Copy nginx_check.sh from node2 to node1:
scp /usr/local/src/nginx_check.sh root@node1:/usr/local/src
Start keepalived on both node1 and node2:
systemctl start keepalived.service
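To verify the failover setup, check which node currently holds the virtual IP; stopping nginx and keepalived on the master should move the VIP to the backup within a few seconds:
# The VIP 192.168.200.50 should appear on ens33 of the current MASTER node
ip addr show ens33 | grep 192.168.200.50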
2.2 Flume Installation
Deploy flume on master, node1, and node2. Deployment is simple: just extract the archive.
cd /usr/local
tar -zxvf apache-flume-1.9.0-cdh6.2.0-bin.tar.gz
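To verify the installation (assuming the archive unpacks to apache-flume-1.9.0-cdh6.2.0-bin), flume can print its version:
cd /usr/local/apache-flume-1.9.0-cdh6.2.0-bin
bin/flume-ng version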
Configure flume with an HTTP source, a memory channel, and a Kafka sink:
vi flume-conf.properties
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = http
a1.sources.r1.port = 8080
a1.sources.r1.handler = org.apache.flume.source.http.JSONHandler
a1.sources.r1.HttpConfiguration.sendServerVersion = false
a1.sources.r1.ServerConnector.idleTimeout = 300
# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = mytopic
a1.sinks.k1.kafka.bootstrap.servers = localhost:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
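With the configuration saved, create the Kafka topic (unless topic auto-creation is enabled on the broker) and start the agent on each node. A minimal sketch, assuming flume-conf.properties sits under flume's conf directory and a Kafka 2.x broker at localhost:9092 as configured above (on Kafka 3.x and later, replace --zookeeper localhost:2181 with --bootstrap-server localhost:9092):
# Create the topic the sink writes to (run from the Kafka installation directory)
bin/kafka-topics.sh --create --topic mytopic --partitions 1 --replication-factor 1 --zookeeper localhost:2181
# Start the agent named a1 with the configuration above (run from the flume home directory)
bin/flume-ng agent --conf conf --conf-file conf/flume-conf.properties --name a1 -Dflume.root.logger=INFO,console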
3. Testing
3.1 Testing with Postman
In Postman, send a POST request to the nginx address (for example http://192.168.199.113/), which forwards to a flume HTTP source, with the following JSON body:
[{
    "headers" : {
        "timestamp" : "434324343",
        "host" : "random_host.example.com"
    },
    "body" : "random_body"
},
{
    "headers" : {
        "namenode" : "namenode.example.com",
        "datanode" : "random_datanode.example.com"
    },
    "body" : "really_random_body"
}]
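The same request can be sent from the command line with curl (assuming the JSON above is saved as events.json and nginx answers on node2):
curl -X POST -H "Content-Type: application/json" -d @events.json http://192.168.199.113/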
3.2 Verify Kafka consumption from the command line
bin/kafka-console-consumer.sh --topic mytopic --from-beginning --bootstrap-server localhost:9092
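If the whole pipeline is working, the consumer should print the bodies of the two test events, i.e. random_body and really_random_body.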