Basic concepts
agent: a complete Flume process, made up of a source (collection), a channel (buffer), and a sink (write-out)
source: where the data comes from
channel: the buffer that connects the source and the sink
sink: writes events to the data store (e.g. HDFS) or to the next agent
Data flow: source -> channel -> sink -> (storage, or the next agent's source)
Installation
Extract the tarball: tar -zxvf apache-flume-1.6.0-bin.tar.gz
Then go into the Flume directory and edit conf/flume-env.sh to set JAVA_HOME.
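For example (the JDK path below is an assumption; point it at your own install):
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk   # assumed path, adjust to your JDK
Then start the agent: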
bin/flume-ng agent -c conf/ -f dir-hdfs.conf -n ag1
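Here -c (--conf) is the Flume configuration directory, -f (--conf-file) the agent definition file, and -n (--name) the agent name, which must match the property prefix used in that file (ag1 below).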
Directory spooling: scan a folder and collect any new file that appears. Note that files must be fully written before they land in the directory; the spooldir source does not handle files that are still being appended to.
# Name the three components
ag1.sources = source1
ag1.sinks = sink1
ag1.channels = channel1
# Configure the source
# Read files from a spooling directory
ag1.sources.source1.type = spooldir
# Directory to watch
ag1.sources.source1.spoolDir = /root/log/
# Suffix appended to a file once it has been fully collected
ag1.sources.source1.fileSuffix = .FINISHED
# Maximum line length for a single event
ag1.sources.source1.deserializer.maxLineLength = 5120
# Configure the sink
ag1.sinks.sink1.type = hdfs
# A new directory is created every round interval (see the round settings below)
ag1.sinks.sink1.hdfs.path = hdfs://ip1:9000/access_log/%y-%m-%d/%H-%M
# Prefix for generated files
ag1.sinks.sink1.hdfs.filePrefix = app_log
# Suffix for generated files
ag1.sinks.sink1.hdfs.fileSuffix = .log
# Flush to HDFS every 100 events
ag1.sinks.sink1.hdfs.batchSize = 100
# Write events through unchanged (plain data stream instead of SequenceFile)
ag1.sinks.sink1.hdfs.fileType = DataStream
# Serialize events as text
ag1.sinks.sink1.hdfs.writeFormat = Text
## roll: rules that control when to roll over to a new file
# Roll by file size (bytes)
ag1.sinks.sink1.hdfs.rollSize = 512000
# Roll by event count
ag1.sinks.sink1.hdfs.rollCount = 1000000
# Roll by time interval (seconds)
ag1.sinks.sink1.hdfs.rollInterval = 60
## Rules that control how directories are generated
# Round the timestamp down when building the path, so a new directory is used every 10 minutes
ag1.sinks.sink1.hdfs.round = true
ag1.sinks.sink1.hdfs.roundValue = 10
ag1.sinks.sink1.hdfs.roundUnit = minute
# Use the agent's local time rather than a timestamp header from the event
ag1.sinks.sink1.hdfs.useLocalTimeStamp = true
# Configure the channel
ag1.channels.channel1.type = memory
## Maximum number of events held in the channel
ag1.channels.channel1.capacity = 100000
## Maximum number of events per transaction (600)
ag1.channels.channel1.transactionCapacity = 600
# Bind the source and the sink to the channel
ag1.sources.source1.channels = channel1
ag1.sinks.sink1.channel = channel1
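A quick way to exercise this config (the sample file name is an assumption):
mkdir -p /root/log
cp /tmp/access.log /root/log/    # drop a finished file into the spool dir; it is renamed to access.log.FINISHED once collected
hdfs dfs -ls /access_log/        # dated subdirectories should start appearing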
Tailing a file: collect new lines as they are appended.
Under the hood this simply reads whatever the tail command prints. Only the source section changes; the channel and sink are configured as above. Note that the exec source offers no delivery guarantees: if the agent dies, data produced by tail in the meantime is lost.
# Configure the source
# Run a command and turn its output into events
ag1.sources.source1.type = exec
# The command to run
ag1.sources.source1.command = tail -F /root/log
Chaining agents
Start the agent:
bin/flume-ng agent -c conf -f conf/avro-m-log.conf -n a1 -Dflume.root.logger=INFO,console
Send test data with the avro client:
bin/flume-ng avro-client -H localhost -p 4141 -F log.10
Upstream (client) agent
Takes data from a tail command and sends it to an avro port.
Another node can be configured with an avro source to relay the data onward to external storage.
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /root/log
# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hdp-05
a1.sinks.k1.port = 4141
a1.sinks.k1.batch-size = 2
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
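Start the upstream agent with this config (the file name tail-avro.conf is an assumption):
bin/flume-ng agent -c conf -f conf/tail-avro.conf -n a1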
Downstream (server) agent
Receives data on an avro port and sinks it to HDFS.
Collection config file: avro-hdfs.conf
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
## The avro source acts as a receiving server
a1.sources.r1.type = avro
a1.sources.r1.bind = hdp-05
a1.sources.r1.port = 4141
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/taildata/%y-%m-%d/
a1.sinks.k1.hdfs.filePrefix = tail-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 24
a1.sinks.k1.hdfs.roundUnit = hour
a1.sinks.k1.hdfs.rollInterval = 0
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 50
a1.sinks.k1.hdfs.batchSize = 10
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# Output file type; the default is SequenceFile, DataStream writes plain text
a1.sinks.k1.hdfs.fileType = DataStream
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
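Start the downstream agent first, so the avro port is already listening when the client connects:
bin/flume-ng agent -c conf -f conf/avro-hdfs.conf -n a1 -Dflume.root.logger=INFO,console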
Send data:
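Following the earlier avro-client example (log.10 stands in for whatever local file you want to ship), pointed at the host and port the downstream agent is bound to:
bin/flume-ng avro-client -H hdp-05 -p 4141 -F log.10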