Problems you may encounter throughout the Flink On Hudi series

1. ERROR org.apache.hudi.sink.compact.CompactFunction [] - Executor executes action [Execute compaction for instant 20220331114224581 from task 0] error

```
ERROR org.apache.hudi.sink.compact.CompactFunction                 [] - Executor executes action [Execute compaction for instant 20220331114224581 from task 0] error
java.lang.NoClassDefFoundError: org/apache/hadoop/mapreduce/lib/input/FileInputFormat
  at org.apache.hudi.org.apache.parquet.HadoopReadOptions$Builder.<init>(HadoopReadOptions.java:95) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
  at org.apache.hudi.org.apache.parquet.HadoopReadOptions.builder(HadoopReadOptions.java:79) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
  at org.apache.hudi.org.apache.parquet.hadoop.ParquetReader$Builder.<init>(ParquetReader.java:198) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
  at org.apache.hudi.org.apache.parquet.avro.AvroParquetReader$Builder.<init>(AvroParquetReader.java:107) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
  at org.apache.hudi.org.apache.parquet.avro.AvroParquetReader$Builder.<init>(AvroParquetReader.java:99) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
  at org.apache.hudi.org.apache.parquet.avro.AvroParquetReader.builder(AvroParquetReader.java:48) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
  at org.apache.hudi.io.storage.HoodieParquetReader.getRecordIterator(HoodieParquetReader.java:65) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
  at org.apache.hudi.table.action.commit.FlinkMergeHelper.runMerge(FlinkMergeHelper.java:89) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
  at org.apache.hudi.table.HoodieFlinkCopyOnWriteTable.handleUpdateInternal(HoodieFlinkCopyOnWriteTable.java:368) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
  at org.apache.hudi.table.HoodieFlinkCopyOnWriteTable.handleUpdate(HoodieFlinkCopyOnWriteTable.java:359) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
  at org.apache.hudi.table.action.compact.HoodieCompactor.compact(HoodieCompactor.java:197) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
  at org.apache.hudi.sink.compact.CompactFunction.doCompaction(CompactFunction.java:104) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
  at org.apache.hudi.sink.compact.CompactFunction.lambda$processElement$0(CompactFunction.java:92) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
  at org.apache.hudi.sink.utils.NonThrownExecutor.lambda$execute$0(NonThrownExecutor.java:93) ~[hudi-flink-bundle_2.11-0.10.1.jar:0.10.1]
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_291]
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_291]
  at java.lang.Thread.run(Thread.java:748) [?:1.8.0_291]
```
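
This NoClassDefFoundError indicates that Hadoop's MapReduce classes (here org.apache.hadoop.mapreduce.lib.input.FileInputFormat, which the bundled Parquet reader needs while compacting) are not on the Flink classpath. A common fix, assuming a local Hadoop installation with HADOOP_HOME set, is to export the Hadoop classpath before starting the Flink cluster; alternatively, place a hadoop-mapreduce-client-core jar matching your Hadoop version under Flink's lib/ directory:

```
# make Hadoop's classes visible to Flink, then (re)start the cluster
export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`
./bin/start-cluster.sh
```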

Apache Flink and Apache Hudi are both Apache Software Foundation open-source projects for processing large-scale data. Apache Flink is a distributed stream-processing engine, while Apache Hudi is a data lake storage framework that supports record-level updates, deletes, and inserts on lake tables. To integrate Apache Flink with Apache Hudi, proceed as follows:

1. Download Apache Flink and unpack it locally, then place the hudi-flink-bundle jar matching your Flink and Scala versions (for example hudi-flink-bundle_2.11-0.10.1.jar) under Flink's lib/ directory. Hudi ships as a library rather than a standalone service, so there is no separate Hudi process to start.

2. Start the Flink cluster:

```
./bin/start-cluster.sh
```

3. Write to Hudi from your Flink job. With hudi-flink-bundle 0.10.x the supported integration path is the Flink SQL hudi connector. The sketch below uses Kafka as the source and keeps the key settings of the original configuration (record key id, partition field ts, table test_table, COPY_ON_WRITE); the topic, bootstrap servers, schema, and storage path are placeholders:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class FlinkHudiIntegrationExample {

    public static void main(String[] args) throws Exception {
        // set up the streaming execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        // Hudi commits data on Flink checkpoints, so checkpointing must be enabled
        env.enableCheckpointing(10_000L);

        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Kafka source table (requires flink-sql-connector-kafka on the classpath)
        tableEnv.executeSql(
            "CREATE TABLE kafka_source (" +
            "  id STRING," +
            "  name STRING," +
            "  ts STRING" +
            ") WITH (" +
            "  'connector' = 'kafka'," +
            "  'topic' = 'test_topic'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'properties.group.id' = 'flink-hudi-demo'," +
            "  'scan.startup.mode' = 'earliest-offset'," +
            "  'format' = 'json'" +
            ")");

        // Hudi sink table: the PRIMARY KEY becomes the Hudi record key and
        // PARTITIONED BY becomes the partition path field
        tableEnv.executeSql(
            "CREATE TABLE test_table (" +
            "  id STRING PRIMARY KEY NOT ENFORCED," +
            "  name STRING," +
            "  ts STRING" +
            ") PARTITIONED BY (ts) WITH (" +
            "  'connector' = 'hudi'," +
            "  'path' = 'hdfs:///tmp/hudi/test_table'," +
            "  'table.type' = 'COPY_ON_WRITE'," +
            "  'hoodie.datasource.write.hive_style_partitioning' = 'true'" +
            ")");

        // stream records from Kafka into the Hudi table
        tableEnv.executeSql("INSERT INTO test_table SELECT id, name, ts FROM kafka_source");
    }
}
```

This example uses Kafka as the data source and writes the records into a Hudi table. Those are the steps for integrating Apache Flink and Apache Hudi; note that you may still run into problems along the way (such as the classpath error above), which have to be resolved case by case.
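
The compaction error at the top of this article is only reachable when the table actually schedules compaction, i.e. a MERGE_ON_READ table. For completeness, here is a minimal sketch of the MERGE_ON_READ variant of the sink, reusing the tableEnv from the example above; the compaction.* values are illustrative, not tuned recommendations:

```java
// MERGE_ON_READ sink: delta log files are compacted into parquet
// asynchronously; that compaction path is where the NoClassDefFoundError
// above is thrown when the Hadoop classes are missing
tableEnv.executeSql(
    "CREATE TABLE test_table_mor (" +
    "  id STRING PRIMARY KEY NOT ENFORCED," +
    "  name STRING," +
    "  ts STRING" +
    ") PARTITIONED BY (ts) WITH (" +
    "  'connector' = 'hudi'," +
    "  'path' = 'hdfs:///tmp/hudi/test_table_mor'," +
    "  'table.type' = 'MERGE_ON_READ'," +
    "  'compaction.async.enabled' = 'true'," +          // compact inside the streaming job
    "  'compaction.trigger.strategy' = 'num_commits'," +
    "  'compaction.delta_commits' = '5'" +              // compact every 5 delta commits
    ")");
```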