Background: While running a Spark job, intermediate results were written to disk several times, with the data saved to HDFS in Parquet format and then read back to continue processing. Partway through execution, the following error occurred:
[spark] Caused by: org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file hdfs://master1:8020/user/xxx/part-00512-0462dbf5-98b2-41fa-925c-3ab53f9f060b-c000.snappy.parquet
[spark] at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:223)
[spark] at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:215)
[spark] at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
[spark] at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:106)
[spark] at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:182)
[spark] at org.apache.spark.sql.execution.datasources.FileScanR
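The write-then-read round trip described above can be sketched as follows. This is a minimal illustration, not the original job: the DataFrame contents, object name, and checkpoint path are hypothetical (only the HDFS host/port are taken from the error message). Reads like this can fail with the `ParquetDecodingException` shown above when the schema Spark uses on read disagrees with what was actually written between stages.

```scala
import org.apache.spark.sql.SparkSession

object ParquetRoundTrip {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("parquet-roundtrip-sketch")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical intermediate result; the real job's data is not shown.
    val intermediate = Seq((1L, "a"), (2L, "b")).toDF("id", "value")

    // Spill the intermediate result to HDFS as Parquet (Spark compresses
    // with snappy by default), mirroring the job described above.
    val checkpointPath = "hdfs://master1:8020/user/xxx/intermediate"
    intermediate.write.mode("overwrite").parquet(checkpointPath)

    // Read the data back to continue processing. If a column's type or
    // encoding changed between the write and the read, decoding can fail
    // with "Can not read value at 0 in block -1".
    val resumed = spark.read.parquet(checkpointPath)
    resumed.show()

    spark.stop()
  }
}
```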
