Flume (4): Fan-Out

Fanning Apache Logs Out to HDFS
This post shows how to configure Flume's fan-out so that Apache access-log data is written to two different locations on HDFS at the same time, walking through the configuration file and the steps to start Flume.

1. Goal

Monitor the Apache access log and write its events to two different locations on HDFS, i.e. fan the data out from one source to two sinks.

2. Prerequisite

Copy Hadoop's core-site.xml and hdfs-site.xml into Flume's conf directory so that the HDFS sinks can pick up the cluster configuration.
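As a minimal sketch, assuming Hadoop lives under /opt/modules/hadoop and Flume under /opt/modules/flume (placeholder paths, adjust to your own installation):

cp /opt/modules/hadoop/etc/hadoop/core-site.xml \
   /opt/modules/hadoop/etc/hadoop/hdfs-site.xml \
   /opt/modules/flume/conf/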

3. Flume configuration: apache-hdfs2.properties

a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f /var/log/httpd/access_log
a1.sources.r1.channels = c1 c2
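# Note: with two channels listed above, the source falls back to the default
# "replicating" channel selector, so every event is copied into both c1 and
# c2 -- this is what produces the fan-out. It can also be set explicitly:
a1.sources.r1.selector.type = replicating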

# Channels: c1 is a durable file channel, c2 is an in-memory channel
a1.channels.c1.type = file
a1.channels.c2.type = memory

# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://bigdata.ibeifeng.com:8020/flume/pot1/%Y-%m-%d/%H
a1.sinks.k1.hdfs.rollInterval = 60
a1.sinks.k1.hdfs.rollSize = 1000000
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.minBlockReplicas = 1
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.roundValue = 1
a1.sinks.k1.hdfs.roundUnit = hour
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.channel = c1
# Describe the second sink
a1.sinks.k2.type = hdfs
a1.sinks.k2.hdfs.path = hdfs://bigdata.ibeifeng.com:8020/flume/pot2/%Y-%m-%d/%H%M
a1.sinks.k2.hdfs.rollInterval = 60
a1.sinks.k2.hdfs.rollSize = 1000000
a1.sinks.k2.hdfs.rollCount = 0
a1.sinks.k2.hdfs.minBlockReplicas = 1
a1.sinks.k2.hdfs.round = true
a1.sinks.k2.hdfs.useLocalTimeStamp = true
a1.sinks.k2.hdfs.roundValue = 3
a1.sinks.k2.hdfs.roundUnit = minute
a1.sinks.k2.hdfs.fileType = DataStream
a1.sinks.k2.hdfs.writeFormat = Text
a1.sinks.k2.channel = c2
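Both channels above run on their defaults (the file channel, for instance, keeps its checkpoint and data files under ~/.flume). A minimal sketch of pinning those down explicitly, using placeholder directories and sizes that are not part of the original setup:

a1.channels.c1.checkpointDir = /opt/data/flume/checkpoint
a1.channels.c1.dataDirs = /opt/data/flume/data
a1.channels.c2.capacity = 10000
a1.channels.c2.transactionCapacity = 1000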

4. Start Flume

bin/flume-ng agent --name a1 --conf conf --conf-file conf/apache-hdfs2.properties &
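To verify the fan-out, generate a request against Apache and then list both HDFS output directories. The hostname and paths follow the configuration above, and the commands assume hdfs is on your PATH; files being written appear with a .tmp suffix until the sinks roll them.

curl http://localhost/ > /dev/null
hdfs dfs -ls -R /flume/pot1
hdfs dfs -ls -R /flume/pot2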
