Data Lake Hudi - 6 - Hudi Integration with Spark - DeltaStreamer
The DeltaStreamer Ingestion Tool
The HoodieDeltaStreamer tool (part of hudi-utilities-bundle) provides a way to ingest data from different sources such as DFS or Kafka, with the following capabilities:
- Exactly-once ingestion of new events from Kafka, and incremental imports from the output of Sqoop or HiveIncrementalPuller, or from files under a DFS folder.
- Supports JSON, Avro, or custom record types for the incoming data.
- Manages checkpoints, rollback, and recovery.
- Leverages Avro schemas stored on DFS or in the Confluent schema registry.
- Supports custom transformation operations (see the sketch right after this list).
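To illustrate the custom-transformation point, below is a minimal sketch assuming Hudi's org.apache.hudi.utilities.transform.Transformer interface and the --transformer-class option of HoodieDeltaStreamer; the class name DropUnderageTransformer and the filter condition are hypothetical, purely for illustration:

import org.apache.hudi.common.config.TypedProperties;
import org.apache.hudi.utilities.transform.Transformer;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Hypothetical example: drop records whose age is below 18 before they are written to Hudi.
// The class would be passed to DeltaStreamer via --transformer-class (signature assumed from Hudi 0.12).
public class DropUnderageTransformer implements Transformer {
    @Override
    public Dataset<Row> apply(JavaSparkContext jsc, SparkSession sparkSession,
                              Dataset<Row> rowDataset, TypedProperties properties) {
        // Arbitrary Spark SQL / DataFrame logic can run here on each incoming batch.
        return rowDataset.filter("cast(age as int) >= 18");
    }
}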
1 Command Overview
Run the following command to view the help documentation:
spark-submit --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer /opt/software/hudi-0.12.0/packaging/hudi-utilities-bundle/target/hudi-utilities-bundle_2.12-0.12.0.jar --help
Schema Provider and Source configuration options: https://hudi.apache.org/docs/hoodie_deltastreamer
The following walkthrough uses the file-based schema provider and JsonKafkaSource as an example:
2 Prepare Kafka Data
(1) Start the Kafka cluster and create a topic for testing
bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --create --topic hudi_test
(2) Write a Java producer to send test data to the topic
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.4.1</version>
</dependency>
<!-- fastjson <= 1.2.80 has known security vulnerabilities, so use 1.2.83 or later -->
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.83</version>
</dependency>
import com.alibaba.fastjson.JSONObject;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;
import java.util.Random;

public class TestKafkaProducer {
    public static void main(String[] args) {
        // Producer configuration: brokers, acks, batching and compression settings
        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop102:9092,hadoop103:9092,hadoop104:9092");
        props.put("acks", "-1");
        props.put("batch.size", "1048576");
        props.put("linger.ms", "5");
        props.put("compression.type", "snappy");
        props.put("buffer.memory", "33554432");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        Random random = new Random();
        // Send 1000 JSON records matching the source schema (userid, username, age, partition)
        for (int i = 0; i < 1000; i++) {
            JSONObject model = new JSONObject();
            model.put("userid", i);
            model.put("username", "name" + i);
            model.put("age", 18);
            model.put("partition", random.nextInt(100));
            producer.send(new ProducerRecord<String, String>("hudi_test", model.toJSONString()));
        }
        producer.flush();
        producer.close();
    }
}
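To confirm the producer actually wrote into the topic, you can optionally check it with Kafka's console consumer, for example:
bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic hudi_test --from-beginning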
3 Prepare Configuration Files
(1) Define the Avro schema files (both source and target)
mkdir /opt/module/hudi-props/
vim /opt/module/hudi-props/source-schema-json.avsc
{
  "type": "record",
  "name": "Profiles",
  "fields": [
    {
      "name": "userid",
      "type": ["null", "string"],
      "default": null
    },
    {
      "name": "username",
      "type": ["null", "string"],
      "default": null
    },
    {
      "name": "age",
      "type": ["null", "string"],
      "default": null
    },
    {
      "name": "partition",
      "type": ["null", "string"],
      "default": null
    }
  ]
}
cp source-schema-json.avsc target-schema-json.avsc
(2) Copy Hudi's base.properties configuration file
cp /opt/software/hudi-0.12.0/hudi-utilities/src/test/resources/delta-streamer-config/base.properties /opt/module/hudi-props/
(3) Based on the template provided in the source code, write your own Kafka source configuration file
cp /opt/software/hudi-0.12.0/hudi-utilities/src/test/resources/delta-streamer-config/kafka-source.properties /opt/module/hudi-props/
vim /opt/module/hudi-props/kafka-source.properties
include=hdfs://hadoop102:8020/hudi-props/base.properties
# Key fields, for kafka example
hoodie.datasource.write.recordkey.field=userid
hoodie.datasource.write.partitionpath.field=partition
# schema provider configs
hoodie.deltastreamer.schemaprovider.source.schema.file=hdfs://hadoop102:8020/hudi-props/source-schema-json.avsc
hoodie.deltastreamer.schemaprovider.target.schema.file=hdfs://hadoop102:8020/hudi-props/target-schema-json.avsc
# Kafka Source
hoodie.deltastreamer.source.kafka.topic=hudi_test
#Kafka props
bootstrap.servers=hadoop102:9092,hadoop104:9092,hadoop103:9092
auto.offset.reset=earliest
group.id=test-group
(4) Upload the configuration files to HDFS
hadoop fs -put /opt/module/hudi-props/ /
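Optionally list the directory to confirm the upload; the paths referenced by include= and the schema provider settings in kafka-source.properties must point to these HDFS locations:
hadoop fs -ls /hudi-props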
4 Copy the Required Jar into Spark
cp /opt/software/hudi-0.12.0/packaging/hudi-utilities-bundle/target/hudi-utilities-bundle_2.12-0.12.0.jar /opt/module/spark-3.2.2/jars/
The hudi-utilities-bundle_2.12-0.12.0.jar must be placed under Spark's jars directory; otherwise the job fails with errors about classes and methods that cannot be found.
5 Run the Import Command
spark-submit \
--class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer \
/opt/module/spark-3.2.2/jars/hudi-utilities-bundle_2.12-0.12.0.jar \
--props hdfs://hadoop102:8020/hudi-props/kafka-source.properties \
--schemaprovider-class org.apache.hudi.utilities.schema.FilebasedSchemaProvider \
--source-class org.apache.hudi.utilities.sources.JsonKafkaSource \
--source-ordering-field userid \
--target-base-path hdfs://hadoop102:8020/tmp/hudi/hudi_test \
--target-table hudi_test \
--op BULK_INSERT \
--table-type MERGE_ON_READ
6 Check the Import Result
(1) Start spark-sql
spark-sql \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
--conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog' \
--conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'
(2) Create a Hudi table with the location pointing at the import path
use spark_hudi;
create table hudi_test using hudi location 'hdfs://hadoop102:8020/tmp/hudi/hudi_test';
(3) Query the Hudi table
select * from hudi_test;
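If the import succeeded, the query should return the 1000 generated records, and each row also carries Hudi's metadata columns such as _hoodie_commit_time and _hoodie_record_key. A narrower check might look like:
select _hoodie_record_key, userid, username, age from hudi_test limit 10;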
7 Summary
- 1. DeltaStreamer is a tool provided by Hudi and integrated with Spark; with just a few configuration files it can ingest data from an external source into a Hudi table.
- 2. The configuration falls into three parts:
  - 1. Schema (field description) files
    - source-schema-json.avsc
    - target-schema-json.avsc
    If the source fields and the target fields are identical, these two files can simply be kept the same.
  - 2. Hudi base configuration file
    - base.properties
    Holds basic Hudi parameters.
  - 3. Source-side configuration file; since this example uses Kafka, the Kafka-related settings go here
    - kafka-source.properties
- 3. All configuration files are uploaded to HDFS; in the source-side configuration file, make sure the referenced paths (include= and the schema file locations) match the actual HDFS directories.