Alibaba Cloud Log Service + Flume + Kafka + Spark Streaming -- Part Two

Last time we successfully collected logs from Alibaba Cloud Log Service with Flume. Today we will sink the logs that Flume pulls from Alibaba Cloud Log Service into Kafka.

Start Kafka

[hadoop@hadoop004 kafka_2.11-0.10.0.0]$ nohup bin/kafka-server-start.sh config/server.properties  >/dev/null 2>&1 &
[1] 25052
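
To confirm the broker actually came up (the nohup redirection discards its output), a quick look at the Java processes and the broker log is usually enough; the log path below assumes Kafka's default logs directory under the installation:

[hadoop@hadoop004 kafka_2.11-0.10.0.0]$ jps -m | grep -i kafka
[hadoop@hadoop004 kafka_2.11-0.10.0.0]$ tail -n 50 logs/server.log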

Create a Kafka topic

[hadoop@hadoop004 kafka_2.11-0.10.0.0]$ bin/kafka-topics.sh \
> --create \
> --zookeeper hadoop004:2181/kafka  \
> --replication-factor 1 \
> --partitions 3 \
> --topic sls
Created topic "sls".

[hadoop@hadoop004 kafka_2.11-0.10.0.0]$ bin/kafka-topics.sh \
> --list \
> --zookeeper hadoop004:2181/kafka
sls
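
To double-check the partition count and replica assignment of the new topic, the same script also supports --describe:

[hadoop@hadoop004 kafka_2.11-0.10.0.0]$ bin/kafka-topics.sh \
> --describe \
> --zookeeper hadoop004:2181/kafka \
> --topic sls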

Start a Kafka consumer

[hadoop@hadoop004 kafka_2.11-0.10.0.0]$ bin/kafka-console-consumer.sh \
> --zookeeper hadoop004:2181/kafka  \
> --topic sls \
> --from-beginning
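
Before wiring up Flume, an optional sanity check is to push a couple of test messages through the console producer; anything typed here should show up in the consumer above:

[hadoop@hadoop004 kafka_2.11-0.10.0.0]$ bin/kafka-console-producer.sh \
> --broker-list hadoop004:9092 \
> --topic sls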

Configure the Flume agent file

sls-flume-kafka.sources = sls-source
sls-flume-kafka.channels = sls-memory-channel
sls-flume-kafka.sinks = kafka-sink

sls-flume-kafka.sources.sls-source.type = com.aliyun.loghub.flume.source.LoghubSource
sls-flume-kafka.sources.sls-source.endpoint = cn-shenzhen.log.aliyuncs.com
sls-flume-kafka.sources.sls-source.project = <Your Loghub project>
sls-flume-kafka.sources.sls-source.logstore = <Your Loghub logstore>
sls-flume-kafka.sources.sls-source.accessKeyId = <Your Access Key Id>
sls-flume-kafka.sources.sls-source.accessKey = <Your Access Key>
sls-flume-kafka.sources.sls-source.deserializer = JSON
sls-flume-kafka.sources.sls-source.sourceAsField = true
sls-flume-kafka.sources.sls-source.timeAsField = true
sls-flume-kafka.sources.sls-source.topicAsField = true
sls-flume-kafka.sources.sls-source.fetchInOrder = true
sls-flume-kafka.sources.sls-source.initialPosition = timestamp
sls-flume-kafka.sources.sls-source.timestamp = 1562299808

sls-flume-kafka.channels.sls-memory-channel.type = memory
sls-flume-kafka.channels.sls-memory-channel.capacity = 20000
sls-flume-kafka.channels.sls-memory-channel.transactionCapacity = 100

#sls-flume-kafka.sinks.kafka-sink.type = logger
sls-flume-kafka.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
sls-flume-kafka.sinks.kafka-sink.topic = sls
sls-flume-kafka.sinks.kafka-sink.brokerList = hadoop004:9092
sls-flume-kafka.sinks.kafka-sink.requiredAcks = 1
sls-flume-kafka.sinks.kafka-sink.batchSize = 20

sls-flume-kafka.sources.sls-source.channels = sls-memory-channel
sls-flume-kafka.sinks.kafka-sink.channel = sls-memory-channel
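
A note on the last two source properties: initialPosition = timestamp tells the Loghub source where to start consuming, and timestamp appears to be a Unix timestamp in seconds (the 1562299808 above falls in early July 2019). One way to generate such a value, assuming GNU date is available, is:

[hadoop@hadoop004 ~]$ date +%s
[hadoop@hadoop004 ~]$ date -d "2019-07-05 12:00:00" +%s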

Start Flume

[hadoop@hadoop004 bin]$ ./flume-ng agent --name sls-flume-kafka --conf /data/aaron/app/apache-flume-1.6.0-cdh5.7.0-bin/conf/conffile --conf-file /data/aaron/app/apache-flume-1.6.0-cdh5.7.0-bin/conf/conffile/sls-flume.conf -Dflume.root.logger=INFO,console
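
Once the configuration has been verified in the foreground like this, the agent can be pushed into the background in the same way as Kafka; the log file name here is just an example:

[hadoop@hadoop004 bin]$ nohup ./flume-ng agent --name sls-flume-kafka --conf /data/aaron/app/apache-flume-1.6.0-cdh5.7.0-bin/conf/conffile --conf-file /data/aaron/app/apache-flume-1.6.0-cdh5.7.0-bin/conf/conffile/sls-flume.conf > flume.log 2>&1 &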

Wait a moment...

Success: the Kafka consumer is now receiving the logs from Alibaba Cloud Log Service.

The next step is to consume these messages with Spark Streaming. Stay tuned!
