A Step-by-Step Guide: Deploying and Using the Flume Log Collection System on Hadoop

I. Preparation

1. Hadoop has already been installed, and the HDFS and YARN services are running normally (a quick check is sketched right after this list);

2. The Flume installation package, apache-flume-1.9.0-bin.tar.gz;

3. The remote connection tools Xftp 8 and Xshell 8;

4. Make sure the hosts can reach each other over the network.
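As a quick sanity check on item 1 (a minimal sketch, assuming the standard Hadoop commands are already on the PATH of Hadoop1), you can confirm that the HDFS and YARN daemons are running before continuing:

# The NameNode/DataNode and ResourceManager/NodeManager processes should be listed
jps
# Optional: confirm that HDFS is reachable and the DataNodes have registered
hdfs dfsadmin -report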

II. Upload and Extract

1. Upload

Use Xftp 8 to upload the Flume installation package you downloaded locally to the Hadoop1 node.

2. Extract

Command:

tar -zxvf apache-flume-1.9.0-bin.tar.gz -C /export/servers/

3. Rename

(1) Change into the installation directory:

cd /export/servers

(2) Rename the extracted directory:

mv apache-flume-1.9.0-bin flume-1.9.0

 

III. Configure Environment Variables

vi /etc/profile
# Add the following lines:
export FLUME_HOME=/export/servers/flume-1.9.0
export PATH=$PATH:$FLUME_HOME/bin
# After saving, reload the environment variables:
source /etc/profile
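To confirm that the environment variables took effect, you can print the Flume version (assuming the extraction and PATH setup above completed without errors):

# Should report Flume 1.9.0 if FLUME_HOME and PATH are set correctly
flume-ng version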

IV. Distribute to the Other Nodes

1. Distribute the flume-1.9.0 installation directory from Hadoop1

scp -r /export/servers/flume-1.9.0 hadoop2:/export/servers/
scp -r /export/servers/flume-1.9.0 hadoop3:/export/servers/

2. Distribute the environment variable file from Hadoop1

scp /etc/profile hadoop2:/etc/profile
scp /etc/profile hadoop3:/etc/profile

3. Reload the environment variables

After distribution completes, reload the environment variables on Hadoop2 and Hadoop3:

source /etc/profile
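Optionally, you can verify the distributed installations from Hadoop1 (a sketch assuming passwordless SSH between the nodes is already set up, as is usual for a Hadoop cluster):

# Each node should report the same Flume 1.9.0 version
ssh hadoop2 "source /etc/profile && flume-ng version"
ssh hadoop3 "source /etc/profile && flume-ng version"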

V. Write the Collection Configurations

1. Collection configuration on Hadoop1

cd /export/servers/flume-1.9.0/conf/ 
vi exec-avro.conf

# File contents:
# Source: tail the local log file and feed events into channel c1
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1
a1.sources.r1.channels = c1
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /export/data/123.log

# Channel: buffer events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000

# Sinks: forward events over Avro to the agents on hadoop2 and hadoop3
a1.sinks.k1.channel = c1
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop2
a1.sinks.k1.port = 53421

a1.sinks.k2.channel = c1
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop3
a1.sinks.k2.port = 53421

# Sink group: load-balance events across the two Avro sinks
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
a1.sinkgroups.g1.processor.selector = random
a1.sinkgroups.g1.processor.selector.maxTimeOut = 10000
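The exec source above tails /export/data/123.log, so that file should exist before the agents start (a minimal sketch; /export/data is simply the path used in this tutorial):

# Create the directory and an empty log file for the exec source to tail
mkdir -p /export/data
touch /export/data/123.log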

2. Collection configuration on Hadoop2

cd /export/servers/flume-1.9.0/conf/
vi avro-logger1.conf

# File contents:
a1.sources = r1 
a1.sinks = k1
a1.channels = c1
a1.sources.r1.type = avro
a1.sources.r1.bind = hadoop2
a1.sources.r1.port = 53421
a1.sinks.k1.type = logger
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

3. Collection configuration on Hadoop3

cd /export/servers/flume-1.9.0/conf/
vi avro-logger2.conf

# File contents (identical to the Hadoop2 configuration except that the bind address is hadoop3):
a1.sources = r1
a1.sinks = k1
a1.channels = c1
a1.sources.r1.type = avro
a1.sources.r1.bind = hadoop3
a1.sources.r1.port = 53421
a1.sinks.k1.type = logger
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

VI. Collect the Data

1. Start the agent on Hadoop2

source /etc/profile 

cd /export/servers/flume-1.9.0/

flume-ng agent --name a1 --conf conf/ --conf-file conf/avro-logger1.conf -Dflume.root.logger=INFO,console

 

2. Start the agent on Hadoop3

source /etc/profile 

cd /export/servers/flume-1.9.0/

flume-ng agent --name a1 --conf conf/ --conf-file conf/avro-logger2.conf -Dflume.root.logger=INFO,console

3. Start the agent on Hadoop1

source /etc/profile 

cd /export/servers/flume-1.9.0/ 

flume-ng agent --name a1 --conf conf/ --conf-file conf/exec-avro.conf -Dflume.root.logger=INFO,console 

4. Generate test data from the Hadoop1 command line

(Note: open another Xshell 8 session to Hadoop1 and run the following command there.)

while true;do echo "lxz flume flume ..." >> /export/data/123.log;sleep 1;done

When you are done, press Ctrl+C in each terminal to stop the collection processes.
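The commands above run every agent in the foreground, which is convenient for watching the logger output. If you would rather keep an agent alive after closing its terminal, one possible variant (not part of the original steps, shown only as a sketch) is to launch it in the background from the same flume-1.9.0 directory, for example on Hadoop2:

# Run the agent in the background and capture its output in a log file
nohup flume-ng agent --name a1 --conf conf/ --conf-file conf/avro-logger1.conf -Dflume.root.logger=INFO,console > agent.log 2>&1 &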
