Spark Streaming: Receiving Data from Kafka and Writing the Results to HBase

This post walks through reading data from Kafka with Spark Streaming, computing a word count and top-N ranking with Spark SQL, and storing the results in HBase. It covers a Kafka producer that simulates incoming data, the Spark Streaming setup, two ways of writing data into HBase, and how to run the program.


Requirements

Kafka + Spark Streaming + Spark SQL + HBase
Output the top-5 ranking of word counts.
The rank serves as the HBase rowkey; the word and its count are stored as columns (see the layout example below).
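As a hypothetical illustration of that layout (the words and counts below are made up), the target HBase table looks like this:

rowkey | word  | count
1      | kafka | 42
2      | hello | 31
...    | ...   | ...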

Implementation

Create a Kafka producer that generates random sample data

import java.util.Properties

import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

object producer {
  def main(args: Array[String]): Unit = {
    val topic = "words"
    val brokers = "master:9092,slave1:9092,slave2:9092"

    // Legacy (Kafka 0.8.x) producer configuration.
    val prop = new Properties()
    prop.put("metadata.broker.list", brokers)
    prop.put("serializer.class", "kafka.serializer.StringEncoder")

    val kafkaConfig = new ProducerConfig(prop)
    val producer = new Producer[String, String](kafkaConfig)

    // Sample sentences to choose from at random.
    val content: Array[String] = new Array[String](5)
    content(0) = "kafka kafka produce"
    content(1) = "kafka produce message"
    content(2) = "hello world hello"
    content(3) = "wordcount topK topK"
    content(4) = "hbase spark kafka"

    // Send one randomly chosen sentence every 200 ms.
    while (true) {
      val i = (math.random * 5).toInt
      producer.send(new KeyedMessage[String, String](topic, content(i)))
      println(content(i))
      Thread.sleep(200)
    }
  }
}
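Note that kafka.producer.Producer and KeyedMessage belong to the legacy Scala producer API shipped with Kafka 0.8.x; with newer Kafka client libraries the same send loop would be written against org.apache.kafka.clients.producer.KafkaProducer instead.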

Create the Spark Streaming context

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setMaster("local[2]").setAppName("Networkcount")
val sc = new SparkContext(conf)
val ssc = new StreamingContext(sc, Seconds(1))
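Two details worth noting: local[2] gives the application at least two threads, which the Spark Streaming documentation recommends when running locally so that processing is not starved, and Seconds(1) sets the batch interval, i.e. how much incoming Kafka data each micro-batch covers.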

Configure Kafka and read the incoming data via KafkaUtils.createDirectStream.
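A minimal sketch of this step, assuming the spark-streaming-kafka 0.8 integration that matches the producer above (the topic name and broker list are carried over from the producer):

import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils

// Direct (receiver-less) stream: Spark queries the Kafka partitions itself,
// so no separate receiver thread is needed.
val kafkaParams = Map[String, String](
  "metadata.broker.list" -> "master:9092,slave1:9092,slave2:9092")
val topics = Set("words")

val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, topics)

// Each record is a (key, value) pair; only the message body is needed.
val lines = messages.map(_._2)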


                
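From here the stream can be turned into the word count and top-5 ranking with Spark SQL and written to HBase. A minimal sketch of one possible implementation, assuming a Spark 2.x SparkSession and an existing HBase table named wordcount with a column family named result (both names are assumptions, as is the use of foreachRDD with a fresh connection per batch):

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.sql.SparkSession

lines.foreachRDD { rdd =>
  val spark = SparkSession.builder.config(rdd.sparkContext.getConf).getOrCreate()
  import spark.implicits._

  // Word count via Spark SQL, keeping only the top 5 words.
  rdd.flatMap(_.split(" ")).toDF("word").createOrReplaceTempView("words")
  val top5 = spark.sql(
    "SELECT word, COUNT(*) AS cnt FROM words GROUP BY word ORDER BY cnt DESC LIMIT 5")
    .collect()

  // Rank as rowkey; word and count as columns, per the requirements above.
  val conn = ConnectionFactory.createConnection(HBaseConfiguration.create())
  val table = conn.getTable(TableName.valueOf("wordcount")) // assumed table name
  top5.zipWithIndex.foreach { case (row, idx) =>
    val put = new Put(Bytes.toBytes((idx + 1).toString))
    put.addColumn(Bytes.toBytes("result"), Bytes.toBytes("word"),
      Bytes.toBytes(row.getString(0)))
    put.addColumn(Bytes.toBytes("result"), Bytes.toBytes("count"),
      Bytes.toBytes(row.getLong(1).toString))
    table.put(put)
  }
  table.close()
  conn.close()
}

ssc.start()
ssc.awaitTermination()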