PairRDD
Spark provides a number of operations specific to RDDs that contain key-value pairs. These RDDs are called pair RDDs. Pair RDDs are a building block of many programs, because they expose interfaces for operating on each key in parallel or for regrouping data across nodes.
- Creating a pair RDD
When you need to turn an ordinary RDD into a pair RDD, you can do so with map. For example, in Scala, create a pair RDD that uses the first word of each line as the key:
val lines = sc.parallelize(List("i like scala","spark is very good"),3)
val pairRDD = lines.map(line=>(line.split(" ")(0),line))
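Collecting pairRDD confirms that the first word of each line became the key (a sketch of the expected output; the exact res numbering depends on your REPL session):
scala> pairRDD.collect
res0: Array[(String, String)] = Array((i,i like scala), (spark,spark is very good))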
- Transformations on a single pair RDD
(1) reduceByKey(): merge the values that share the same key
scala> val rdd1=sc.parallelize(List(-1,-2,-3,0,1,2))
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[3] at parallelize at <console>:21
scala> val prdd = rdd1.map(x=>(x,x*x))
prdd: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[4] at map at <console>:23
scala> val result=prdd.map(x=>(x._2,x._1))
result: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[6] at map at <console>:25
scala> val result1=result.reduceByKey(_+_).collect
result1: Array[(Int, Int)] = Array((4,0), (0,0), (1,0), (9,-3))
(2) groupByKey(): group the values that share the same key
scala> val rdd1=sc.parallelize(List(-1,-2,-3,0,1,2))
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:21
scala> val prdd = rdd1.map(x=>(x,x*x))
prdd: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[1] at map at <console>:23
scala> val result=prdd.map(x=>(x._2,x._1))
result: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[2] at map at <console>:25
scala> val groupresult = result.groupByKey()
groupresult: org.apache.spark.rdd.RDD[(Int, Iterable[Int])] = ShuffledRDD[3] at groupByKey at <console>:27
scala> groupresult.collect
res0: Array[(Int, Iterable[Int])] = Array((4,CompactBuffer(-2, 2)), (0,CompactBuffer(0)), (1,CompactBuffer(-1, 1)), (9,CompactBuffer(-3)))
(3) mapValues(): apply a function to each value of the pair RDD without changing the keys
scala> val prdd1 = prdd.mapValues(x=>x+1)
prdd1: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[4] at mapValues at <console>:25
scala> prdd1.collect
res1: Array[(Int, Int)] = Array((-1,2), (-2,5), (-3,10), (0,1), (1,2), (2,5))
flatMapValues(): apply a function that returns an iterator to each value of the pair RDD, then generate one key-value record with the original key for each element of the result.
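A minimal sketch of flatMapValues (the sample data here is illustrative, not taken from the examples above; res numbering depends on the session):
scala> val fm = sc.parallelize(List((1,"a b c"),(2,"d")))
scala> fm.flatMapValues(v => v.split(" ")).collect
res0: Array[(Int, String)] = Array((1,a), (1,b), (1,c), (2,d))
Each element returned by the iterator keeps the original key, which is why key 1 appears three times.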
(4) sortByKey(): return an RDD sorted by key
scala> val sort = prdd.sortByKey()
scala> sort.collect
res1: Array[(Int, Int)] = Array((-3,9), (-2,4), (-1,1), (0,0), (1,1), (2,4))
- Transformations on two pair RDDs
(1) subtractByKey(): remove the elements whose key also appears in the other RDD
scala> val rdd1=sc.parallelize(List(-1,-2,-3,0,1,2))
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:21
scala> val prdd = rdd1.map(x=>(x,x*x))
prdd: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[1] at map at <console>:23
scala> val frdd = prdd.map(x=>(x._2,x._1))
frdd: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[2] at map at <console>:25
scala> val subk = frdd.subtractByKey(prdd)
subk: org.apache.spark.rdd.RDD[(Int, Int)] = SubtractedRDD[3] at subtractByKey at <console>:27
scala> subk.collect
res0: Array[(Int, Int)] = Array((4,-2), (4,2), (9,-3))
- The join operation: perform an inner join between two RDDs
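A minimal sketch of join (the sample data is illustrative, not taken from the examples above; element order in the collected array may vary):
scala> val a = sc.parallelize(List((1,"apple"),(2,"pear"),(3,"plum")))
scala> val b = sc.parallelize(List((1,"red"),(2,"green"),(4,"blue")))
scala> a.join(b).collect
res0: Array[(Int, (String, String))] = Array((1,(apple,red)), (2,(pear,green)))
Keys that appear in only one of the two RDDs (3 and 4 here) are dropped from the result, which is what makes join an inner join.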