Spark Core Issue Log: join shuffle

This post looks at partitionBy, persist, and related usage in Spark and their effect on shuffle, explains the difference between join and zipPartitions, shows how sensible configuration can reduce shuffle volume, and describes what localCheckpoint does.


1. partitionBy: when a key's hashCode is negative, the partitioner returns a negative partition id and the shuffle writer throws java.lang.ArrayIndexOutOfBoundsException (a defensive partitioner sketch follows below):

        at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:151)
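
A minimal sketch of a partitioner that guards against this, assuming a TdidPartitioner along the lines of the one used in item 2 (the real key type and hashing logic are not shown in these notes); taking the non-negative modulus keeps getPartition inside [0, numPartitions):

        import org.apache.spark.Partitioner

        // Hypothetical reconstruction of TdidPartitioner; only the negative-hashCode handling matters here.
        class TdidPartitioner(override val numPartitions: Int) extends Partitioner {
          override def getPartition(key: Any): Int = {
            val h = key.hashCode()
            // hashCode can be negative; a plain h % numPartitions would then be negative too,
            // and the shuffle writer indexes its per-partition buckets out of bounds.
            val mod = h % numPartitions
            if (mod < 0) mod + numPartitions else mod
          }
        }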

2. rdd.partitionBy(new TdidPartitioner(10)).mapPartitionsWithIndex {
     case (idx, value) =>
       Iterator.single(idx, value)
   }

        A. If the value inside the returned Iterator is itself an Iterator, this throws NotSerializableException:

                - object not serializable (class: org.apache.spark.InterruptibleIterator, value: non-empty iterator)

        B. Fix: materialize the iterator before returning it, i.e. Iterator.single(idx, value.toSet); see the corrected snippet below.
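
A sketch of the corrected call under the same assumptions as above (TdidPartitioner as in item 1); the key change is materializing the partition iterator into a concrete collection before it leaves the task:

        val indexed = rdd
          .partitionBy(new TdidPartitioner(10))
          .mapPartitionsWithIndex { case (idx, iter) =>
            // iter is an InterruptibleIterator and is not serializable;
            // converting it to a Set materializes the data so the (idx, data) pair can be shipped.
            Iterator.single((idx, iter.toSet))
          }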

3. Difference between join and zipPartitions

        tmpRdd.join(bloomFilterRdd) vs. tmpRdd.zipPartitions(bloomFilterRdd):

        A. join evaluates the left RDD, then the right RDD, and then joins the records by key.

        B. zipPartitions pairs up the corresponding partitions of the two RDDs and the user function matches records by key one by one; see the sketch below.
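
A hedged sketch of what such a zipPartitions-based match might look like (the RDD names and the Set-based stand-in for the bloom filter are assumptions; zipPartitions requires both RDDs to have the same number of partitions):

        // tmpRdd: RDD[(String, Long)], filterSetRdd: RDD[Set[String]] -- hypothetical types
        val matched = tmpRdd.zipPartitions(filterSetRdd) { (dataIter, filterIter) =>
          // Each task sees partition i of both RDDs; no shuffle is performed.
          val filter = if (filterIter.hasNext) filterIter.next() else Set.empty[String]
          dataIter.filter { case (key, _) => filter.contains(key) }
        }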

4. persist, and why shuffle keeps growing (streaming)

        val tmpRDD = Analysis.analyserJoin(tdidRDD, filterRDD).persist()
        filterRDD.unpersist()
        filterRDD = tmpRDD
        filterRDD.count()    *** tmpRDD.count() ***
        tmpRDD.unpersist()

        Problem description:

                filterRDD is carried over in a loop from batch to batch; tdidRDD is new data each batch.

        A. If unpersist is called and then persist again before any action runs, the next operator first re-executes the previous batch's computation, because unpersist has released that cached result: nothing is actually computed until an action is invoked, and unpersist marks the RDD as no longer persisted and frees its blocks.

        B. Using tmpRDD.count() instead still re-runs the previous filterRDD computation, because count only executes tmpRDD's own lineage; the next time filterRDD is needed, its result has to be computed again.

        C. Every action recomputes the lineage from scratch unless persist is used (and the data has actually been materialized).

 

        val tmpRDD = Analysis.analyserZip(tdidRDD, filterRDD).persist()
        filterRDD.count()
        filterRDD.unpersist()
        filterRDD = tmpRDD

        *** With this ordering, shuffle does not keep growing ***
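
A minimal sketch of the per-batch pattern, under one reading of points A-C above: materialize the new RDD with an action while the old one is still cached, and only then unpersist the old one. analyserZip here is a hypothetical stand-in for the real Analysis.analyserZip, which is not shown in these notes.

        import org.apache.spark.rdd.RDD

        // Hypothetical stand-in for Analysis.analyserZip: combines the new batch with the carried-over filter RDD.
        def analyserZip(tdidRDD: RDD[(String, Long)], filterRDD: RDD[(String, Long)]): RDD[(String, Long)] =
          tdidRDD.union(filterRDD).reduceByKey(_ + _)

        def updateFilter(batches: Seq[RDD[(String, Long)]], initial: RDD[(String, Long)]): RDD[(String, Long)] = {
          var filterRDD = initial.persist()
          for (tdidRDD <- batches) {
            val tmpRDD = analyserZip(tdidRDD, filterRDD).persist()
            tmpRDD.count()          // materialize the new RDD while the old one is still cached
            filterRDD.unpersist()   // only now release the previous batch's blocks
            filterRDD = tmpRDD      // carry the new RDD into the next batch
          }
          filterRDD
        }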

 

5. join and co-partitioning

        A. After partitioning both sides with the same partitioner, if a mapPartitionsWithIndex or mapPartitions step follows, preservesPartitioning must be set, otherwise the partitioner is lost (see the sketch after this list):

                default: preservesPartitioning: Boolean = false

                override val partitioner = if (preservesPartitioning) firstParent[T].partitioner else None

        B. After partitioning with the same partitioner, map and flatMap reset the partitioner to None (filter is implemented with preservesPartitioning = true, so it keeps the partitioner).

        C. Verified with a streaming job:

                1. With no partitioner on either side, shuffle grows with every batch, and serializing and deserializing the data does not bring it back down; even after restarting and re-reading the serialized data, shuffle keeps growing on top of the previous shuffleWrite: both the filterRDD shuffleWrite and the tdidRDD shuffleWrite are full shuffles.

                   Conclusion: the growth of the BloomFilter's shuffleWrite and shuffleRead depends on the data put into it; the more data put (the lower the duplication rate), the larger the increment.

                2. When one RDD has a partitioner and the other's partitioner is None, the shuffled volume equals the data volume of the RDD whose partitioner is None.

                3. When the two RDDs are co-partitioned with the same number of partitions, the shuffle growth per batch equals the previous batch's tdidRDD data volume; after serialization and deserialization, shuffle drops to about 20 MB.

                4. When both RDDs have partitioners but different partition counts, the shuffled volume equals that of the RDD with fewer partitions.

        D. For a single join, a.join(b), the shuffle volume does not depend on which side a and b are placed on; for a.join(b).join(c), putting the large table first gives the smallest shuffle.

        E. zipPartitions involves no shuffle even when the RDDs do not share a partitioner; at compute time, the corresponding partition of each RDD is processed together before moving on to the next partition.
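
A sketch of the co-partitioning point in A (the partition count, key type, and input data are arbitrary, and sc stands for your SparkContext): keeping the partitioner through mapPartitions lets the subsequent join reuse it, so no additional shuffle is needed.

        import org.apache.spark.{HashPartitioner, SparkContext}

        def coPartitionedJoin(sc: SparkContext): Unit = {
          val part  = new HashPartitioner(10)
          val left  = sc.parallelize(Seq(("a", 1L), ("b", 2L))).partitionBy(part)
          val right = sc.parallelize(Seq(("a", 9L), ("c", 3L))).partitionBy(part)

          // preservesPartitioning = true keeps `part` on the mapped RDD; with the default (false)
          // the partitioner drops to None and the join below would re-shuffle both sides.
          val leftMapped = left.mapPartitions(
            iter => iter.map { case (k, v) => (k, v + 1) },
            preservesPartitioning = true)

          // Both sides now share the same partitioner, so the join adds no new shuffle.
          leftMapped.join(right).count()
        }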

6. localCheckpoint

        A. Truncates the RDD's lineage and keeps the current RDD's data as a new starting point.

        B. If any of its blocks is lost, the data cannot be recomputed, so the job throws an exception and the application exits.
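
A minimal, hypothetical follow-on to the loop in item 4 showing where localCheckpoint would sit: it stores the data only in executor storage, which is exactly why a lost block cannot be recovered (point B above).

        // Truncate the lineage each batch so the DAG does not keep growing; data lives only on executors.
        val truncated = tmpRDD.localCheckpoint()
        truncated.count()   // the action materializes the local checkpoint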

 
