Spark RDD Operators (5): CombineByKey

combineByKey is a key Spark operation for aggregating distributed datasets; it handles values that share the same key but live in different partitions. createCombiner initializes the accumulator for a key, mergeValue merges values within a single partition, and mergeCombiners merges the accumulators produced by different partitions. Using the computation of average scores as an example, this post shows how to use combineByKey in both Scala and Java.


CombineByKey

Aggregating data is straightforward when it all sits in one place. For a distributed dataset you can use combineByKey, the primitive on which the other by-key aggregation operations are built:

def combineByKey[C](
    createCombiner: V => C,
    mergeValue: (C, V) => C,
    mergeCombiners: (C, C) => C): RDD[(K, C)]

combineByKey involves three functions: createCombiner, mergeValue and mergeCombiners (a minimal sketch follows the list below):

  • createCombiner: combineByKey() walks through every element of a partition, so each element's key is either one it has not seen before in that partition or one it has already met. For a new key, combineByKey() calls createCombiner() to build the initial value of that key's accumulator.

  • mergeValue: for a key that has already been seen while processing the current partition, mergeValue() merges the key's current accumulator with the new value.

  • mergeCombiners: because every partition is processed independently, the same key can end up with several accumulators. When two or more partitions hold an accumulator for the same key, the user-supplied mergeCombiners() merges those per-partition results.
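To make the three functions concrete, here is a minimal self-contained sketch (the object name and sample data are made up for illustration) that sums the values for each key. With createCombiner being the identity function, this is essentially what reduceByKey(_ + _) does, which is why combineByKey is described above as the ancestor of the other aggregations:

import org.apache.spark.{SparkConf, SparkContext}

object CombineByKeySumSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("combineByKeySketch"))
    // toy (key, value) pairs spread over two partitions
    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3), ("b", 4), ("a", 5)), 2)
    val sums = pairs.combineByKey(
      (v: Int) => v,                         // createCombiner: first value seen for a key in a partition
      (acc: Int, v: Int) => acc + v,         // mergeValue: fold the next value from the same partition into the accumulator
      (acc1: Int, acc2: Int) => acc1 + acc2  // mergeCombiners: merge accumulators built on different partitions
    )
    sums.collect().foreach(println)          // (a,9) and (b,6), in no particular order
    sc.stop()
  }
}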

Example: computing average scores

Scala version

import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

object CombineByKeyScala {
    // case class describing one score record
    case class ScoreDetail(stuName: String, subject: String, score: Int)

    def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setMaster("local[2]").setAppName("actionrdddemo")
        val sc = new SparkContext(conf)
        val scores = List(
          ScoreDetail("xiaoming", "Math", 98),
          ScoreDetail("xiaoming", "English", 88),
          ScoreDetail("wangwu", "Math", 75),
          ScoreDetail("wangwu", "English", 78),
          ScoreDetail("lihua", "Math", 90),
          ScoreDetail("lihua", "English", 80),
          ScoreDetail("zhangsan", "Math", 91),
          ScoreDetail("zhangsan", "English", 80)
        )
        // key each record by student name
        val scoresWithKey = for (i <- scores) yield (i.stuName, i)
        // create the RDD with three partitions and cache it
        val scoreWithKeyRdd = sc.makeRDD(scoresWithKey).partitionBy(new HashPartitioner(3)).cache()
        // print the (key, subject) tuples, partition by partition
        scoreWithKeyRdd.foreachPartition(partition => {
          partition.foreach(x => println((x._1, x._2.subject)))   // x is a (stuName, ScoreDetail) tuple
        })

        // aggregate to a (sum, count) accumulator per student
        val stuScoreInfoRdd = scoreWithKeyRdd.combineByKey(
          // the three functions
          (x: ScoreDetail) => (x.score, 1),
          (acc1: (Int, Int), x: ScoreDetail) => (acc1._1 + x.score, acc1._2 + 1),
          (acc2: (Int, Int), acc3: (Int, Int)) => (acc2._1 + acc3._1, acc2._2 + acc3._2)
        )
        // option 1: pattern match; toDouble avoids truncation from integer division
        val stuAvg = stuScoreInfoRdd.map({ case (key, value) => (key, value._1.toDouble / value._2) })
        // option 2: positional access on the (key, (sum, count)) tuple
        val stuAvg2 = stuScoreInfoRdd.map(x => (x._1, x._2._1.toDouble / x._2._2))
        stuAvg2.collect.foreach(println)
        sc.stop()
    }
}

Notes:
createCombiner: (x: ScoreDetail) => (x.score, 1)
The first time xiaoming is encountered, this function turns the value into another type: (xiaoming, ScoreDetail(...)) becomes (xiaoming, (98, 1)).

mergeValue: (acc: (Int, Int), x: ScoreDetail) => (acc._1 + x.score, acc._2 + 1)
When xiaoming comes up again in the same partition, the accumulator and the new record are merged: (xiaoming, (98, 1)) plus another (xiaoming, ScoreDetail(...)) becomes (xiaoming, (186, 2)).

mergeCombiners: (acc1: (Int, Int), acc2: (Int, Int)) => (acc1._1 + acc2._1, acc1._2 + acc2._2)
This merges xiaoming's accumulators coming from different partitions; if all of xiaoming's records happen to land in the same partition, this function is never invoked for that key.
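
For reference, with the sample data above the aggregated (sum, count) pairs and the resulting averages work out as follows (the order returned by collect may vary):

(xiaoming, (186, 2))  -> average 93.0
(wangwu, (153, 2))    -> average 76.5
(lihua, (170, 2))     -> average 85.0
(zhangsan, (171, 2))  -> average 85.5

Carrying a (sum, count) pair instead of a running average is what makes the per-partition results safe to merge: sums and counts simply add up, whereas averaging averages would give the wrong answer when the counts differ.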

Java version

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

public class CombineByKeyJava {
    // Simple bean holding one score record. The original post does not show this class;
    // it is reconstructed here from the way it is used below.
    public static class ScoreDetailsJava implements Serializable {
        String stuName;
        String subject;
        int score;

        public ScoreDetailsJava(String stuName, String subject, int score) {
            this.stuName = stuName;
            this.subject = subject;
            this.score = score;
        }
    }

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("distinctJava");
        JavaSparkContext sc = new JavaSparkContext(conf);

        List<ScoreDetailsJava> scoreDetails = new ArrayList<>();
        scoreDetails.add(new ScoreDetailsJava("xiaoming", "Math", 98));
        scoreDetails.add(new ScoreDetailsJava("xiaoming", "English", 88));
        scoreDetails.add(new ScoreDetailsJava("wangwu", "Math", 75));
        scoreDetails.add(new ScoreDetailsJava("wangwu", "English", 78));
        scoreDetails.add(new ScoreDetailsJava("lihua", "Math", 90));
        scoreDetails.add(new ScoreDetailsJava("lihua", "English", 80));
        scoreDetails.add(new ScoreDetailsJava("zhangsan", "Math", 91));
        scoreDetails.add(new ScoreDetailsJava("zhangsan", "English", 80));

        JavaRDD<ScoreDetailsJava> scoreDetailRdd = sc.parallelize(scoreDetails);

        // key each record by student name
        JavaPairRDD<String, ScoreDetailsJava> pairRdd = scoreDetailRdd.mapToPair(new PairFunction<ScoreDetailsJava, String, ScoreDetailsJava>() {
            @Override
            public Tuple2<String, ScoreDetailsJava> call(ScoreDetailsJava scoreDetailsJava) throws Exception {
                return new Tuple2<>(scoreDetailsJava.stuName, scoreDetailsJava);
            }
        });

        // createCombiner: turn the first value seen for a key into a (score, 1) accumulator
        Function<ScoreDetailsJava, Tuple2<Integer, Integer>> createCombiner = new Function<ScoreDetailsJava, Tuple2<Integer, Integer>>() {
            @Override
            public Tuple2<Integer, Integer> call(ScoreDetailsJava s) throws Exception {
                return new Tuple2<>(s.score, 1);
            }
        };

        // mergeValue: fold another record from the same partition into the accumulator
        Function2<Tuple2<Integer, Integer>, ScoreDetailsJava, Tuple2<Integer, Integer>> mergeValue
                = new Function2<Tuple2<Integer, Integer>, ScoreDetailsJava, Tuple2<Integer, Integer>>() {
            @Override
            public Tuple2<Integer, Integer> call(Tuple2<Integer, Integer> t, ScoreDetailsJava s) throws Exception {
                return new Tuple2<>(t._1 + s.score, t._2 + 1);
            }
        };

        // mergeCombiners: merge accumulators built on different partitions
        Function2<Tuple2<Integer, Integer>, Tuple2<Integer, Integer>, Tuple2<Integer, Integer>> mergeCombiners = new Function2<Tuple2<Integer, Integer>, Tuple2<Integer, Integer>, Tuple2<Integer, Integer>>() {
            @Override
            public Tuple2<Integer, Integer> call(Tuple2<Integer, Integer> t1, Tuple2<Integer, Integer> t2) throws Exception {
                return new Tuple2<>(t1._1 + t2._1, t1._2 + t2._2);
            }
        };

        JavaPairRDD<String, Tuple2<Integer, Integer>> stringTuple2JavaPairRdd = pairRdd.combineByKey(createCombiner, mergeValue, mergeCombiners);
        List<Tuple2<String, Tuple2<Integer, Integer>>> collect = stringTuple2JavaPairRdd.collect();
        for (Tuple2<String, Tuple2<Integer, Integer>> t2 : collect) {
            // divide as double so the average is not truncated
            System.out.println(t2._1 + "," + t2._2._1 / (double) t2._2._2);
        }

        sc.stop();
    }
}