References:
http://spark.apache.org/docs/1.6.3/streaming-flume-integration.html
https://blog.youkuaiyun.com/weixin_41615494/article/details/79521120
Flume is a framework for real-time log collection, and it can be paired with the Spark Streaming real-time processing framework: Flume produces data continuously, and Spark Streaming processes it as it arrives. Spark Streaming can connect to Flume NG in two ways:
in one, Flume NG pushes messages to Spark Streaming;
in the other, Spark Streaming polls (pulls) the data from Flume.
1. Push mode: start the Spark code first, then Flume. Only a single host and port can be configured, since Flume pushes to exactly one receiver.
(1) Write the flume-push.conf configuration file
# Name the components on this agent
#push mode
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# source
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /export/data/flume
a1.sources.r1.fileHeader = true
# Describe the sink
a1.sinks.k1.type = avro
# the receiver: host and port of the Spark application
a1.sinks.k1.hostname = 192.168.31.172
a1.sinks.k1.port = 8888
a1.sinks.k1.batchSize = 2000
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
# capacity must be >= transactionCapacity, which in turn must cover the sink batchSize (2000)
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 5000
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Note: the hostname and port in the configuration file are the IP address and port of the server where the Spark application runs.
Start Flume (remember: run the Spark code first, then Flume):
bin/flume-ng agent -n a1 -c conf/ -f conf/flume-push.conf -Dflume.root.logger=INFO,console
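Once both sides are running, you can generate test data by dropping a new file into the spooled directory; for example (the file name is arbitrary, and the spooldir source expects files not to be modified after they appear):
echo "hello spark hello flume" > /export/data/flume/words.txt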
(2) Write the Spark Streaming code
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.flume.{FlumeUtils, SparkFlumeEvent}
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
 * Created by ZX on 2015/6/22.
 */
object FlumeWordCount_Push {
  def main(args: Array[String]) {
    val host = args(0)
    val port = args(1).toInt
    // 1. Create SparkConf
    val conf: SparkConf = new SparkConf()
      .setAppName("SparkStreamingFlume_Push")
      .setMaster("local[2]")
    // 2. Create SparkContext
    val sc: SparkContext = new SparkContext(conf)
    sc.setLogLevel("WARN")
    // 3. Create StreamingContext with a 5-second batch interval
    val ssc = new StreamingContext(sc, Seconds(5))
    // 4. Receive the data Flume pushes (an Avro server listening on host:port)
    val stream: ReceiverInputDStream[SparkFlumeEvent] = FlumeUtils.createStream(ssc, host, port)
    // 5. Each Flume event has the shape {"header": ..., "body": ...}; extract the body as a line of text
    val lineDstream: DStream[String] = stream.map(x => new String(x.event.getBody.array()))
    // 6. Split each line into words, mapping each word to a count of 1
    val wordAndOne: DStream[(String, Int)] = lineDstream.flatMap(_.split(" ")).map((_, 1))
    // 7. Sum the counts of identical words within the batch
    val result: DStream[(String, Int)] = wordAndOne.reduceByKey(_ + _)
    // 8. Print the result
    result.print()
    // Start the computation
    ssc.start()
    ssc.awaitTermination()
  }
}
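The map in step 5 keeps only the event body. If the Flume headers are also needed (for example the fileHeader added by the spooldir source), here is a minimal sketch; SparkFlumeEvent wraps an Avro FlumeEvent, whose headers come back as a Java map:
// hypothetical variant of step 5 that keeps the headers alongside the body
val withHeaders: DStream[(java.util.Map[CharSequence, CharSequence], String)] =
  stream.map(e => (e.event.getHeaders, new String(e.event.getBody.array())))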
(3) Run: start the Spark code first, then Flume.
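For example, the push application might be submitted like this (the jar name here is hypothetical; the two trailing arguments are the hostname and port from flume-push.conf):
bin/spark-submit --class FlumeWordCount_Push flume-wordcount.jar 192.168.31.172 8888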
2. Poll (pull) mode: start Flume first, then Spark Streaming. This mode can receive data from multiple Flume agents at the same time.
(1) Install Flume 1.6 or later.
(2) Download the dependency jar and place spark-streaming-flume-sink_2.11-2.0.2.jar into Flume's lib directory. (The official integration guide also lists the matching scala-library and commons-lang3 jars as required in Flume's lib.)
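On the Spark application side, the spark-streaming-flume artifact must be on the classpath as well; in sbt that might look like the following (version chosen to match the 2.0.2 sink jar above):
libraryDependencies += "org.apache.spark" %% "spark-streaming-flume" % "2.0.2"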
(3) Write the Flume agent configuration. Since this is pull mode, Flume only needs to stage the data on its own machine; the sink listens locally and Spark Streaming connects to it to pull.
(4) Write the flume-poll.conf configuration file
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# source
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /export/data/flume
a1.sources.r1.fileHeader = true
# Describe the sink: the Spark custom sink used by pull mode
a1.sinks.k1.type = org.apache.spark.streaming.flume.sink.SparkSink
# hostname of the local machine running this Flume agent
a1.sinks.k1.hostname = master
a1.sinks.k1.port = 8888
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
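Start each Flume agent before the Spark application (pull mode requires Flume to be up first), analogous to push mode:
bin/flume-ng agent -n a1 -c conf/ -f conf/flume-poll.conf -Dflume.root.logger=INFO,console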
import java.net.InetSocketAddress
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.flume.{FlumeUtils, SparkFlumeEvent}

// TODO: word count over Flume data with Spark Streaming -- Poll (pull) mode
object SparkStreamingFlume_Poll {
  def main(args: Array[String]): Unit = {
    // 1. Create SparkConf
    val sparkConf: SparkConf = new SparkConf()
      .setAppName("SparkStreamingFlume_Poll")
      .setMaster("local[2]")
    // 2. Create SparkContext
    val sc = new SparkContext(sparkConf)
    sc.setLogLevel("WARN")
    // 3. Create StreamingContext with a 5-second batch interval
    val ssc = new StreamingContext(sc, Seconds(5))
    // Addresses of the Flume agents to pull from; data from several agents can be received at once.
    // Each entry must match the hostname/port of a SparkSink in that agent's flume-poll.conf.
    val address = Seq(new InetSocketAddress("192.168.216.120", 9999), new InetSocketAddress("192.168.216.121", 9999))
    // 4. Pull the data from Flume
    val stream: ReceiverInputDStream[SparkFlumeEvent] = FlumeUtils.createPollingStream(ssc, address, StorageLevel.MEMORY_AND_DISK_SER_2)
    // 5. Each Flume event has the shape {"header": ..., "body": ...}; extract the body as a line of text
    val lineDstream: DStream[String] = stream.map(x => new String(x.event.getBody.array()))
    // 6. Split each line into words, mapping each word to a count of 1
    val wordAndOne: DStream[(String, Int)] = lineDstream.flatMap(_.split(" ")).map((_, 1))
    // 7. Sum the counts of identical words within the batch
    val result: DStream[(String, Int)] = wordAndOne.reduceByKey(_ + _)
    // 8. Print the result
    result.print()
    // Start the computation
    ssc.start()
    ssc.awaitTermination()
  }
}
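The reduceByKey in step 7 counts words within each 5-second batch only. To accumulate running totals across batches, here is a minimal sketch using updateStateByKey; it would replace steps 7-8 above, before ssc.start(), and the checkpoint path is an assumption (any reliable directory works):
// checkpointing is required by stateful operations such as updateStateByKey
ssc.checkpoint("./spark-checkpoint")
// merge this batch's counts for each word into its running total
val totals: DStream[(String, Int)] = wordAndOne.updateStateByKey(
  (newValues: Seq[Int], state: Option[Int]) => Some(newValues.sum + state.getOrElse(0)))
totals.print()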