Let's talk about the difference between the join and cogroup operators in Spark RDD.
- join simply stitches two RDDs together on matching keys: every key match produces one output record, so you get as many records as there are matches. Code:
private static void join() {
    // Create SparkConf
    SparkConf conf = new SparkConf()
            .setAppName("join")
            .setMaster("local");
    // Create JavaSparkContext
    JavaSparkContext sc = new JavaSparkContext(conf);
    sc.setLogLevel("WARN");
    // Mock input collections
    List<Tuple2<Integer, String>> studentList = Arrays.asList(
            new Tuple2<Integer, String>(1, "leo"),
            new Tuple2<Integer, String>(2, "jack"),
            new Tuple2<Integer, String>(3, "tom"));
    List<Tuple2<Integer, Integer>> scoreList = Arrays.asList(
            new Tuple2<Integer, Integer>(1, 100),
            new Tuple2<Integer, Integer>(2, 90),
            new Tuple2<Integer, Integer>(3, 60),
            new Tuple2<Integer, Integer>(1, 70),
            new Tuple2<Integer, Integer>(2, 80),
            new Tuple2<Integer, Integer>(3, 50));
    // Parallelize the two RDDs
    JavaPairRDD<Integer, String> students = sc.parallelizePairs(studentList);
    JavaPairRDD<Integer, Integer> scores = sc.parallelizePairs(scoreList);
    // Use the join operator to associate the two RDDs.
    // join matches records by key and returns a JavaPairRDD whose first
    // generic type is the shared key type (the join is performed on the key),
    // and whose second generic type is Tuple2<v1, v2>, where v1 and v2 are
    // the value types of the two original RDDs.
    // Each element of the returned RDD is one pair matched up by key.
    // For example, given an RDD containing (1, 1) (1, 2) (1, 3)
    // and another containing (1, 4) (2, 1) (2, 2):
    // cogroup would produce (1, ((1, 2, 3), (4))) and (2, ((), (1, 2))),
    // while join produces (1, (1, 4)) (1, (2, 4)) (1, (3, 4))
    JavaPairRDD<Integer, Tuple2<String, Integer>> studentScores = students.join(scores);
    // Print the studentScores RDD
    studentScores.foreach(
            new VoidFunction<Tuple2<Integer, Tuple2<String, Integer>>>() {
                private static final long serialVersionUID = 1L;
                @Override
                public void call(Tuple2<Integer, Tuple2<String, Integer>> t)
                        throws Exception {
                    System.out.println("student id: " + t._1);
                    System.out.println("student name: " + t._2._1);
                    System.out.println("student score: " + t._2._2);
                    System.out.println("===============================");
                }
            });
    // Close the JavaSparkContext
    sc.close();
}
Output:
student id: 1
student name: leo
student score: 100
===============================
student id: 1
student name: leo
student score: 70
===============================
student id: 3
student name: tom
student score: 60
===============================
student id: 3
student name: tom
student score: 50
===============================
student id: 2
student name: jack
student score: 90
===============================
student id: 2
student name: jack
student score: 80
===============================
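The pairwise matching behavior of join can be illustrated without Spark. Below is a minimal plain-Java sketch; `JoinSketch` and its nested-loop `join` helper are hypothetical illustration code, not part of the Spark API (Spark hash-partitions by key instead of scanning, but the output is the same):

```java
import java.util.*;
import java.util.AbstractMap.SimpleEntry;

public class JoinSketch {
    // Inner join of two key-value lists: for each key present in both,
    // emit one (key, (v1, v2)) record per matching combination of values.
    static <K, V1, V2> List<Map.Entry<K, Map.Entry<V1, V2>>> join(
            List<Map.Entry<K, V1>> left, List<Map.Entry<K, V2>> right) {
        List<Map.Entry<K, Map.Entry<V1, V2>>> out = new ArrayList<>();
        for (Map.Entry<K, V1> l : left) {
            for (Map.Entry<K, V2> r : right) {
                if (l.getKey().equals(r.getKey())) {
                    out.add(new SimpleEntry<>(l.getKey(),
                            new SimpleEntry<>(l.getValue(), r.getValue())));
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Map.Entry<Integer, String>> students = Arrays.asList(
                new SimpleEntry<>(1, "leo"), new SimpleEntry<>(2, "jack"));
        List<Map.Entry<Integer, Integer>> scores = Arrays.asList(
                new SimpleEntry<>(1, 100), new SimpleEntry<>(1, 70),
                new SimpleEntry<>(2, 90));
        // key 1 matches twice, key 2 once -> 3 output records
        System.out.println(join(students, scores).size()); // 3
    }
}
```

Note that a key matching n times on the left and m times on the right yields n * m output records, which is exactly why the Spark example above prints two rows per student.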
The cogroup operator also stitches two RDDs together by key, but it aggregates the values: the number of output records is determined by the number of distinct keys (one record per key). All the values for a given key from RDD1 are collected into one Iterable, and likewise all the values for that key from RDD2 into another. Code:
private static void cogroup() {
    // Create SparkConf
    SparkConf conf = new SparkConf()
            .setAppName("cogroup")
            .setMaster("local");
    // Create JavaSparkContext
    JavaSparkContext sc = new JavaSparkContext(conf);
    sc.setLogLevel("WARN");
    // Mock input collections
    List<Tuple2<Integer, String>> studentList = Arrays.asList(
            new Tuple2<Integer, String>(1, "leo"),
            new Tuple2<Integer, String>(2, "jack"),
            new Tuple2<Integer, String>(3, "tom"));
    List<Tuple2<Integer, Integer>> scoreList = Arrays.asList(
            new Tuple2<Integer, Integer>(1, 100),
            new Tuple2<Integer, Integer>(2, 90),
            new Tuple2<Integer, Integer>(3, 60),
            new Tuple2<Integer, Integer>(1, 70),
            new Tuple2<Integer, Integer>(2, 80),
            new Tuple2<Integer, Integer>(3, 50));
    // Parallelize the two RDDs
    JavaPairRDD<Integer, String> students = sc.parallelizePairs(studentList);
    JavaPairRDD<Integer, Integer> scores = sc.parallelizePairs(scoreList);
    // cogroup differs from join: for each key, all the values it matches in
    // each RDD are collected into a single Iterable, so there is exactly one
    // output record per distinct key.
    // The easiest way to grasp cogroup is to run this example and study the
    // output carefully.
    JavaPairRDD<Integer, Tuple2<Iterable<String>, Iterable<Integer>>> studentScores =
            students.cogroup(scores);
    // Print the studentScores RDD
    studentScores.foreach(
            new VoidFunction<Tuple2<Integer, Tuple2<Iterable<String>, Iterable<Integer>>>>() {
                private static final long serialVersionUID = 1L;
                @Override
                public void call(
                        Tuple2<Integer, Tuple2<Iterable<String>, Iterable<Integer>>> t)
                        throws Exception {
                    System.out.println("student id: " + t._1);
                    System.out.println("student name: " + t._2._1);
                    System.out.println("student score: " + t._2._2);
                    System.out.println("===============================");
                }
            });
    // Close the JavaSparkContext
    sc.close();
}
Output:
student id: 1
student name: [leo]
student score: [100, 70]
===============================
student id: 3
student name: [tom]
student score: [60, 50]
===============================
student id: 2
student name: [jack]
student score: [90, 80]
===============================
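The cogroup semantics can likewise be sketched in plain Java. `CogroupSketch` and its `cogroup` helper below are hypothetical illustration code, not the Spark implementation; they just mirror the contract of one output record per distinct key, with a pair of value collections, and a key present in only one side still appears, paired with an empty collection:

```java
import java.util.*;
import java.util.AbstractMap.SimpleEntry;

public class CogroupSketch {
    // cogroup: one output row per distinct key; all left values for the key
    // go into one list, all right values for the key into another.
    static <K, V1, V2> Map<K, Map.Entry<List<V1>, List<V2>>> cogroup(
            List<Map.Entry<K, V1>> left, List<Map.Entry<K, V2>> right) {
        Map<K, Map.Entry<List<V1>, List<V2>>> out = new LinkedHashMap<>();
        for (Map.Entry<K, V1> l : left) {
            out.computeIfAbsent(l.getKey(),
                    k -> new SimpleEntry<>(new ArrayList<>(), new ArrayList<>()))
               .getKey().add(l.getValue());
        }
        for (Map.Entry<K, V2> r : right) {
            out.computeIfAbsent(r.getKey(),
                    k -> new SimpleEntry<>(new ArrayList<>(), new ArrayList<>()))
               .getValue().add(r.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<Integer, Map.Entry<List<String>, List<Integer>>> g = cogroup(
                Arrays.asList(new SimpleEntry<>(1, "leo")),
                Arrays.asList(new SimpleEntry<>(1, 100),
                              new SimpleEntry<>(1, 70)));
        // one record for key 1, with both score values grouped together
        System.out.println(g.get(1).getKey());   // [leo]
        System.out.println(g.get(1).getValue()); // [100, 70]
    }
}
```

Comparing the two sketches makes the relationship clear: join emits one record per value combination, while cogroup emits one record per key. In fact, Spark implements join on top of cogroup by flattening each pair of grouped collections into their cross product.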