RDD

Resilient Distributed Dataset: logically a single dataset, with its partitions physically distributed across the cluster.
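
A minimal creation sketch (the object name, the local master setting, and the HDFS path are illustrative, not from the original notes):

import org.apache.spark.{SparkConf, SparkContext}

object RddCreationSketch {
  def main(args: Array[String]): Unit = {
    // Assumption: running locally; on a cluster the master is set by spark-submit instead.
    val sc = new SparkContext(new SparkConf().setAppName("RddCreationSketch").setMaster("local[*]"))

    // A logical dataset built from an in-memory collection, split into 4 partitions.
    val nums = sc.parallelize(1 to 100, 4)

    // A logical dataset backed by external storage (path is illustrative).
    val lines = sc.textFile("hdfs://master:9000/input/words.txt")

    println(nums.getNumPartitions)  // 4
    sc.stop()
  }
}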

Operators

  1. Transformation (lazy: only builds the lineage, nothing runs until an action; see the sketch after this list)
    map
    flatMap
    groupByKey
    reduceByKey
    persist (also lazy; strictly not a transformation, it only marks the RDD for caching)
    cache (shorthand for persist with StorageLevel.MEMORY_ONLY)

  2. Action (triggers a job):
    reduce
    collect
    saveAsTextFile
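
A minimal sketch of the laziness above, assuming an existing SparkContext named sc and illustrative HDFS paths:

// Transformations only describe the computation (build the lineage); nothing runs yet.
val words  = sc.textFile("hdfs://master:9000/input/words.txt").flatMap(x => x.split(" "))
val counts = words.map(x => (x, 1)).reduceByKey(_ + _)
counts.cache()  // also lazy: only marks the RDD for caching

// Actions trigger a job and materialize results.
val result = counts.collect()
counts.saveAsTextFile("hdfs://master:9000/output/wordcount")  // illustrative output path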

Narrow dependency: no shuffle is needed.
Wide dependency: a shuffle is needed.

Stage division: walking the lineage backwards, cut at every wide (shuffle) dependency; the chain of narrow-dependency operators between two shuffles forms one stage (sketch below).
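
A small sketch of where the stage boundary falls, again assuming an existing SparkContext sc and an illustrative path; toDebugString prints the lineage, and the ShuffledRDD introduced by reduceByKey marks the wide dependency:

val lines  = sc.textFile("hdfs://master:9000/input/words.txt")  // illustrative path
val pairs  = lines.flatMap(x => x.split(" ")).map(x => (x, 1))  // narrow dependencies: same stage
val counts = pairs.reduceByKey(_ + _)                           // wide dependency: shuffle, new stage

// The printed lineage shows a ShuffledRDD where the stage is cut.
println(counts.toDebugString)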

SparkStreaming

Stateful word count over a socket stream, using updateStateByKey:
package spark.example

import org.apache.spark.{HashPartitioner, SparkConf}
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingWordCount {

  def main(args: Array[String]): Unit = {

    val host = args(0)
    val port = args(1)
    val output = args(2)

    val config = new SparkConf().setAppName("StreamingWordCount")
    val ssc = new StreamingContext(config, Seconds(60))

    // State update function for updateStateByKey: for each key, add the counts of the
    // current batch (t._2) to the running total carried over from earlier batches (t._3).
    val updateFunc = (it: Iterator[(String, Seq[Int], Option[Int])]) => {
      it.map(t => (t._1, t._2.sum + t._3.getOrElse(0)))
    }

    // Input DStream from a TCP socket.
    val lines = ssc.socketTextStream(host, port.toInt, StorageLevel.MEMORY_AND_DISK_SER)
    val words = lines.flatMap(x => x.split(' ')).persist(StorageLevel.MEMORY_AND_DISK_SER_2)

    // updateStateByKey needs a checkpoint directory to store the running state.
    ssc.checkpoint("hdfs://master:9000/checkpoint")

    val wordCount = words.map(x => (x, 1))
      .reduceByKey(_ + _)
      .updateStateByKey(updateFunc, new HashPartitioner(ssc.sparkContext.defaultParallelism), true)

    wordCount.print()
    wordCount.saveAsTextFiles(output, "wordcount_updateStateByKey")

    // saveAsTextFiles writes a new output directory on every batch interval, each holding
    // the cumulative counts up to that batch, e.g.:
    //   .../result/file1  a:10
    //   .../result/file2  a:12
    //   .../result/file3  a:15
    //   .../result/file4  a:25
    //   .../result/file5  a:30
    // Over time this accumulates many small files on HDFS.

    // Alternative: foreachRDD exposes the RDD of each batch, e.g. to write results to an
    // external sink instead of HDFS text files (see the sketch after this program).
    /*
    words.foreachRDD(rdd => {
      val wordCount1 = rdd.map(x => (x, 1)).reduceByKey(_ + _)
    })
    */

    ssc.start()
    ssc.awaitTermination()
  }
}
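
A sketch of the foreachRDD alternative referenced in the commented-out block above. It would sit before ssc.start() in the same main method; println stands in for a real external sink (database, message queue, etc.), which is an assumption, not part of the original program:

    // Sketch: per-batch output with foreachRDD (belongs before ssc.start()).
    words.foreachRDD { rdd =>
      val batchCounts = rdd.map(x => (x, 1)).reduceByKey(_ + _)
      batchCounts.foreachPartition { partition =>
        // A real sink would be opened here, once per partition, instead of printing.
        partition.foreach { case (word, count) => println(s"$word -> $count") }
      }
    }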
