1. The operation that triggered the error
A pivot (spreading a column's distinct values into new columns, filled with values from a specified column) failed because the pivot column had more than 10,000 distinct values.
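For context, a minimal sketch of the kind of call that hits this limit (the table and column names here are hypothetical, not from the original job):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.first;

public class PivotSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("Pivot sketch")
                .enableHiveSupport()
                .getOrCreate();

        // Hypothetical input: one row per (user_id, field_name, field_value).
        Dataset<Row> df = spark.table("feature_table");

        // pivot() with no explicit value list first collects the distinct
        // values of field_name on the driver. If there are more than
        // spark.sql.pivotMaxValues of them (default 10000), Spark throws
        // the AnalysisException shown below.
        Dataset<Row> wide = df.groupBy("user_id")
                .pivot("field_name")
                .agg(first("field_value"));

        wide.show();
    }
}
```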
2. The error message
Exception in thread "main" org.apache.spark.sql.AnalysisException: The pivot column field_name has more than 10000 distinct values, this could indicate an error. If this was intended, set spark.sql.pivotMaxValues to at least the number of distinct values of the pivot column.;
at org.apache.spark.sql.RelationalGroupedDataset.pivot(RelationalGroupedDataset.scala:327)
at com.rong360.featureAnalyse.FeatherAnalyseOnlineStep.getPivotData(FeatherAnalyseOnlineStep.java:564)
at com.rong360.featureAnalyse.FeatherAnalyseOnlineStep.main(FeatherAnalyseOnlineStep.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
3. Solution
Raise spark.sql.pivotMaxValues to at least the number of distinct values in the pivot column when building the SparkSession:

SparkSession spark = SparkSession.builder()
        .config("spark.driver.maxResultSize", "40g")
        .config("spark.sql.pivotMaxValues", 20000)
        .appName("Features analyse step")
        .enableHiveSupport()
        .getOrCreate();

The same settings can of course also be placed in a configuration file instead of code.
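Alternatively, the same settings can be passed at submit time via --conf flags; the jar name below is a placeholder:

```shell
spark-submit \
  --conf spark.sql.pivotMaxValues=20000 \
  --conf spark.driver.maxResultSize=40g \
  --class com.rong360.featureAnalyse.FeatherAnalyseOnlineStep \
  feature-analyse.jar
```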
