Spark Streaming (9): Example - Integrating Spark Streaming with Spark SQL for Word Count

This post shows how to combine Spark Streaming with Spark SQL for real-time word counting. The key is the interplay between DStreams and RDDs: via the foreachRDD API, each batch of streaming data is converted to a DataFrame and queried with SQL.


1. Functionality

     This example combines Spark Streaming and Spark SQL to compute a word count over a socket stream. The core concept is the interoperation between DStreams and RDDs, which goes through the foreachRDD API: it hands each batch's data to a user function as a plain RDD, as sketched below.
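A minimal sketch of that bridge (the method name printBatchSizes is illustrative, not part of the example in section 2): the closure passed to foreachRDD runs on the driver once per batch interval and receives that batch's data as an ordinary RDD.

import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.Time
import org.apache.spark.streaming.dstream.DStream

// The (RDD[T], Time) overload of foreachRDD also passes the batch timestamp.
// Inside the closure the data is a plain RDD, so any RDD API (and, via a
// SparkSession, the DataFrame API) can be used on it.
def printBatchSizes(words: DStream[String]): Unit =
  words.foreachRDD { (rdd: RDD[String], time: Time) =>
    println(s"batch at $time contains ${rdd.count()} records")
  }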

2. Code

package Spark

import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext, Time}

/**
  * Spark Streaming integrated with Spark SQL for word count; adapted from the official example:
  * https://github.com/apache/spark/blob/v2.1.0/examples/src/main/scala/org/apache/spark/examples/streaming/SqlNetworkWordCount.scala
  */
object SqlNetworkWordCount {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setMaster("local[2]").setAppName("SqlNetworkWordCount")
    /**
      * Creating a StreamingContext requires a SparkConf and a batch interval
      */
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    val lines = ssc.socketTextStream("bigdata.ibeifeng.com", 6789)
    val words = lines.flatMap(_.split(" "))

    // Convert RDDs of the words DStream to DataFrame and run SQL query
    words.foreachRDD { (rdd: RDD[String], time: Time) =>
      // Get the singleton instance of SparkSession
      val spark = SparkSessionSingleton.getInstance(rdd.sparkContext.getConf)
      import spark.implicits._

      // Convert RDD[String] to RDD[case class] to DataFrame
      val wordsDataFrame = rdd.map(w => Record(w)).toDF()

      // Creates a temporary view using the DataFrame
      wordsDataFrame.createOrReplaceTempView("words")

      // Do word count on table using SQL and print it
      val wordCountsDataFrame =
        spark.sql("select word, count(*) as total from words group by word")
      println(s"========= $time =========")
      wordCountsDataFrame.show()
    }

    ssc.start()
    ssc.awaitTermination()
  }

  /** Case class for converting RDD to DataFrame */
  case class Record(word: String)

  /** Lazily instantiated singleton instance of SparkSession */
  object SparkSessionSingleton {

    @transient private var instance: SparkSession = _

    def getInstance(sparkConf: SparkConf): SparkSession = {
      if (instance == null) {
        instance = SparkSession
          .builder
          .config(sparkConf)
          .getOrCreate()
      }
      instance
    }
  }

}
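The SparkSessionSingleton object lazily creates one SparkSession and reuses it across every batch, rather than constructing a new session inside each foreachRDD invocation.

To compile and run the class, both the streaming and SQL modules must be on the classpath. A minimal build.sbt sketch, assuming Spark 2.1.0 (the version of the linked upstream example) and Scala 2.11; both versions are assumptions to adjust for your environment:

// Hypothetical build.sbt; Spark/Scala versions are assumptions.
name := "sql-network-wordcount"

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming" % "2.1.0",
  "org.apache.spark" %% "spark-sql" % "2.1.0"
)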

3. Testing

(1) Start nc as the input source (-l puts it in listen mode on port 6789, -k keeps it listening after a client disconnects) and type some lines:

[root@bigdata hadoop-2.7.3]#  nc -lk 6789
20180808,ww
20180808,ww
20180808,ww

(2) Result:

========= 1537287530000 ms =========
+-----------+-----+
|       word|total|
+-----------+-----+
|20180808,ww|    3|
+-----------+-----+
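Note that the whole line 20180808,ww is counted as a single word: flatMap(_.split(" ")) splits on spaces only, and the test input contains a comma but no spaces, so each line survives as one token. Splitting on "," instead would count 20180808 and ww separately.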

 
