kafka - flume - consumer

All paths and values below reflect the real layout of my own virtual machine; adjust them to match your environment.

Path of the log file

/root/data/flume/prolog.log
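
The taildir source will pick the file up whenever it appears, but creating it up front makes the first test easier. A minimal setup, assuming the file does not exist yet:

mkdir -p /root/data/flume
touch /root/data/flume/prolog.log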

Create the Kafka topic

kafka-topics.sh --create --topic prolog_02 --partitions 1 --replication-factor 1 --bootstrap-server singlebrown:9092
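
To double-check that the topic exists with the expected partition count, describe it with the same CLI against the same broker:

kafka-topics.sh --describe --topic prolog_02 --bootstrap-server singlebrown:9092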

Create the Flume configuration file

vim /root/flume_job/logconf/flume02_kafka.conf
# Agent components: one taildir source, one file channel, one Kafka sink
a1.sources = s1
a1.channels = c1
a1.sinks = k1

# Taildir source: tail the log file, remembering the read offset in positionFile
a1.sources.s1.type = TAILDIR
a1.sources.s1.filegroups = f1
a1.sources.s1.filegroups.f1 = /root/data/flume/prolog.log
a1.sources.s1.positionFile = /opt/software/flume190/data/taildir/tail_prolog_02.json
a1.sources.s1.batchSize = 10

# File channel: durable buffering on local disk
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /opt/software/flume190/mydata/checkpoint04
a1.channels.c1.dataDirs = /opt/software/flume190/mydata/data
a1.channels.c1.capacity = 1000000
a1.channels.c1.transactionCapacity = 100

# Kafka sink: publish each event to the prolog_02 topic
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = singlebrown:9092
a1.sinks.k1.kafka.topic = prolog_02
a1.sinks.k1.kafka.flumeBatchSize = 10
# producer batch.size is measured in bytes
a1.sinks.k1.kafka.producer.batch.size = 10
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 500

# Wire the components together
a1.sources.s1.channels = c1
a1.sinks.k1.channel = c1
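
Flume's file channel creates its checkpoint and data directories on startup, but to be safe you can pre-create every directory the configuration references (including the one holding the taildir position file) before the first run:

mkdir -p /opt/software/flume190/data/taildir /opt/software/flume190/mydata/checkpoint04 /opt/software/flume190/mydata/data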

Start the Kafka console consumer

The Kafka sink writes each log line as a plain-text record value (no key is set unless the event carries a "key" header), so both key and value are read with the string deserializer:

kafka-console-consumer.sh --bootstrap-server singlebrown:9092 --from-beginning --topic prolog_02 --property print.key=true --key-deserializer org.apache.kafka.common.serialization.StringDeserializer --value-deserializer org.apache.kafka.common.serialization.StringDeserializer

Start Flume

cd /opt/software/flume190/
flume-ng agent -n a1 -c /opt/software/flume190/conf/ -f /root/flume_job/logconf/flume02_kafka.conf -Dflume.root.logger=INFO,console
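
With both the consumer and the agent running, an end-to-end check is to append a line to the tailed file and watch it arrive in the consumer; given the 500 ms producer linger configured above, it should show up within about a second. For example:

echo "pipeline test $(date)" >> /root/data/flume/prolog.log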