In real projects, to avoid writing data to HDFS from a single point all at once, a tiered write topology is often used to spread the load. Four machines are used here: one serves as agent1, which collects the data and fans it out through multiple sinks, while the other three form the second tier and share the load between them.
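A text sketch of the topology (hosts and ports are the ones used in the sink configuration below):

                           +--> agent2 (192.168.236.104:41414) --> HDFS
  tail -F a.txt --> agent1 +--> agent3 (192.168.236.105:41414) --> HDFS
                           +--> agent4 (192.168.236.106:41414) --> HDFS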
Configuration for agent1:
a1.sources=r1
a1.sinks=k1 k2 k3
a1.channels=c1
#source
a1.sources.r1.type=exec
a1.sources.r1.command=tail -F /usr/test/a.txt
#sink group
a1.sinkgroups=g1
a1.sinkgroups.g1.sinks=k1 k2 k3
a1.sinkgroups.g1.processor.type=load_balance
a1.sinkgroups.g1.processor.backoff=true
#round-robin scheduling across the three sinks
a1.sinkgroups.g1.processor.selector=round_robin
#sink1
a1.sinks.k1.type=avro
a1.sinks.k1.hostname=192.168.236.104
a1.sinks.k1.port=41414
#sink2
a1.sinks.k2.type=avro
a1.sinks.k2.hostname=192.168.236.105
a1.sinks.k2.port=41414
#sink3
a1.sinks.k3.type=avro
a1.sinks.k3.hostname=192.168.236.106
a1.sinks.k3.port=41414
#channel
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100
#bind
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
a1.sinks.k2.channel=c1
a1.sinks.k3.channel=c1
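The load_balance processor also accepts selector=random; round_robin is chosen here so the three downstream agents receive roughly equal traffic. If the goal were hot standby rather than load spreading, the sink group could instead use Flume's failover processor. A minimal sketch against the same sink names, with illustrative priorities (the highest-priority live sink receives all events):

a1.sinkgroups.g1.processor.type=failover
a1.sinkgroups.g1.processor.priority.k1=10
a1.sinkgroups.g1.processor.priority.k2=5
a1.sinkgroups.g1.processor.priority.k3=1
a1.sinkgroups.g1.processor.maxpenalty=10000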
agent2 through agent4 all share the same configuration:
a1.sources=r1
a1.channels=c1
a1.sinks=k1
#source
a1.sources.r1.type=avro
a1.sources.r1.bind=0.0.0.0
a1.sources.r1.port=41414
#channels
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100
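#interceptor: adds a timestamp header to each event; the HDFS sink's %y%m%d path escapes resolve against it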
a1.sources.r1.interceptors=i1
a1.sources.r1.interceptors.i1.type=org.apache.flume.interceptor.TimestampInterceptor$Builder
#sinks
a1.sinks.k1.type=hdfs
a1.sinks.k1.hdfs.path=hdfs://ns1/flume/%y%m%d
a1.sinks.k1.hdfs.filePrefix=events-
a1.sinks.k1.hdfs.fileType=DataStream
a1.sinks.k1.hdfs.rollCount=0
a1.sinks.k1.hdfs.rollSize=134217728
a1.sinks.k1.hdfs.rollInterval=60
a1.sinks.k1.hdfs.writeFormat=Text
a1.sinks.k1.hdfs.useLocalTimeStamp=true
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
Once everything is configured, testing by writing data into a.txt shows that, because rollInterval is set to 60 seconds, the sink rolls a new file roughly every minute, so files pile up quickly.
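An illustrative listing of what accumulates (file names are examples only; the numeric part is the sink's timestamp-based counter, and the in-progress file carries a .tmp suffix):

/flume/250115/events-.1736899200000
/flume/250115/events-.1736899260000
/flume/250115/events-.1736899320000.tmp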
Note: start agents 2-4 before agent1, otherwise agent1's Avro sinks will complain that the target ports cannot be reached. Change into the Flume home directory and launch with:
bin/flume-ng agent -n a1 -c conf/ -f conf/a1.conf -Dflume.root.logger=INFO,console
The agents talk to each other over Avro RPC; remember to bind ports above 1024, since lower port numbers are privileged.
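A quick end-to-end smoke test, assuming the HDFS client is available on the machine where you run it (the path and the ns1 nameservice come from the sink configuration above; $(date +%y%m%d) matches the %y%m%d escape):

echo "hello flume" >> /usr/test/a.txt
hdfs dfs -ls hdfs://ns1/flume/$(date +%y%m%d)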