# Name the components of this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Configure the Flume source (the directory to monitor)
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /opt/dir
# Size limit for each line read
a1.sources.r1.deserializer.maxLineLength = 51200000
# Configure the Flume sink (Kafka)
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = topic-test
a1.sinks.k1.brokerList = 192.168.31.244:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 100
# Size limit for Kafka producer requests
a1.sinks.k1.kafka.producer.max.request.size = 51200000
# Configure the Flume channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
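
To run this agent, a minimal sketch, assuming the configuration above is saved as flume-kafka.conf under Flume's conf directory (the file name is an assumption; --name must match the agent name a1):

flume-ng agent --conf conf --conf-file conf/flume-kafka.conf --name a1 -Dflume.root.logger=INFO,console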
flume kafka sink
This article details how to use Flume to collect data from a specified directory, configuring the source, channel, and sink so that the data is ultimately delivered to a Kafka topic. It covers the key steps: setting the directory to monitor, the line and message size limits, the Kafka producer settings, and the memory channel's capacity and transaction capacity.
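
As a quick check that events are reaching the topic, one option is Kafka's console consumer, assuming the Kafka bin scripts are available on the machine (broker address and topic name taken from the configuration above):

kafka-console-consumer.sh --bootstrap-server 192.168.31.244:9092 --topic topic-test --from-beginning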