Spark SQL on Hive throws the following error
Job failed with org.apache.spark.SparkException: File ./hiveforudf-0.0.5-SNAPSHOT-jar-with-dependencies.jar exists and does not match contents of .../hiveforudf-0.0.5-SNAPSHOT-jar-with-dependencies.jar FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.
I found that the error above appeared whenever the query carried many WHERE conditions. This pointed to insufficient memory, so I increased the executor memory, and the error stopped occurring:
set spark.executor.memory=6g;
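A minimal sketch of how this fix would look inside a Hive session, assuming Spark is already configured as the execution engine; the query itself is a hypothetical placeholder standing in for the original UDF query with many WHERE conditions:

```sql
-- Ensure the session runs on the Spark engine (a standard Hive setting;
-- assumed here, since the post implies Spark is already in use).
set hive.execution.engine=spark;

-- Raise executor memory before running the heavy query.
set spark.executor.memory=6g;

-- Hypothetical example query: the real one used a custom UDF from
-- hiveforudf-0.0.5-SNAPSHOT and many WHERE predicates.
SELECT my_udf(col1)
FROM some_table
WHERE col2 = 'a' AND col3 = 'b' AND col4 > 100;
```

Note that `set` statements issued this way apply per session, so the larger executor memory takes effect only for queries submitted after the `set` in the same session.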
In summary: this post covered an out-of-memory problem encountered while running Spark SQL on Hive. Increasing the executor memory to 6 GB resolved the runtime error triggered by queries with many WHERE conditions, and the Spark job then completed normally.
