Task 1
Write Scala code that uses Spark to incrementally extract the data of the tables user_info, sku_info, base_province, base_region, order_info, and order_detail in the MySQL database shtd_store into the corresponding tables user_info, sku_info, base_province, base_region, order_info, and order_detail in the Hive ods database. (If some tables in the ods database contain no data yet, just extract as normal.)
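Since Task 1 applies the same incremental pattern to six tables, one loop can cover them all. The sketch below is a minimal illustration, assuming for simplicity that every table exposes a create_time column usable as the incremental field (that is an assumption, especially for the dimension tables base_province and base_region; adjust the incremental field per table as the task requires). The object name Task1_All is made up here; the connection details mirror the Question 2 code further down.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.lit

object Task1_All {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("Task1_All").master("local[*]")
      .enableHiveSupport().getOrCreate()
    spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")

    val tables = Seq("user_info", "sku_info", "base_province",
      "base_region", "order_info", "order_detail")

    tables.foreach { t =>
      // Pull the source table from MySQL over JDBC.
      val src = spark.read.format("jdbc")
        .option("driver", "com.mysql.jdbc.Driver")
        .option("url", "jdbc:mysql://master:3306/shtd_store?useSSL=false")
        .option("user", "root")
        .option("password", "123456")
        .option("dbtable", t)
        .load()
      src.createOrReplaceTempView("src")

      // Assumed: create_time exists in every table; coalesce makes the
      // first run (empty ODS table) a full extraction.
      val incr = spark.sql(
        s"""select * from src where create_time >
           |(select coalesce(max(create_time), '1970-01-01 00:00:00') from ods.$t)
           |""".stripMargin)

      incr.withColumn("etl_date", lit("20231220"))
        .write.format("hive").mode("append")
        .partitionBy("etl_date").saveAsTable(s"ods.$t")
    }
    spark.stop()
  }
}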
Question 2:
Extract the incremental data of sku_info from the shtd_store database into the table sku_info in the Hive ods database. Using create_time in ods.sku_info as the incremental field, load only the newly added rows, keeping field names and types unchanged, and add a static partition with partition field etl_date, of type String, whose value is the date of the day before the competition day (partition field format yyyyMMdd). Run the command show partitions ods.sku_info in the hive cli and paste a screenshot of the result under the corresponding task number in 【Release\任务B提交结果.docx】 on the client desktop;
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.lit
object Task1_2 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Task1_2").setMaster("local[*]")
    val spark = SparkSession.builder().config(conf).enableHiveSupport().getOrCreate()
    spark.sparkContext.setLogLevel("OFF")
    // Allow writing into the partitioned Hive table.
    spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")
    // Read the source table from MySQL over JDBC.
    val mysqldf = spark.read.format("jdbc")
      .option("driver", "com.mysql.jdbc.Driver")
      .option("url", "jdbc:mysql://master:3306/shtd_store?useSSL=false")
      .option("user", "root")
      .option("password", "123456")
      .option("dbtable", "sku_info")
      .load()
spark.sql("select * from ods.sku_info").createOrReplaceTempView("ods")
mysqldf.createOrReplaceTempView("mysql")
    // Incremental filter: keep only MySQL rows whose create_time is later
    // than the newest create_time already in ODS. coalesce covers an empty
    // ods.sku_info, so a first run simply extracts everything.
    val df1 = spark.sql(
      """
        |select * from mysql
        |where create_time >
        |  (select coalesce(max(create_time), '1970-01-01 00:00:00') from ods)
        |""".stripMargin)
    // Static partition value: the day before the competition day (yyyyMMdd).
    val df2 = df1.withColumn("etl_date", lit("20231220"))
    df2.show()
    // Append into the partitioned ODS table; the constant column becomes the partition value.
    df2.write.format("hive").mode("append").partitionBy("etl_date").saveAsTable("ods.sku_info")
    // Verify that the new partition exists.
    spark.sql("show partitions ods.sku_info").show()
  }
}
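A note on the write: lit("20231220") puts the same constant into every row, so partitionBy("etl_date") lands all of the output in the single partition etl_date=20231220, which is exactly the static partition the task asks for; the hard-coded date only needs to be replaced with the day before the actual competition day. mode("append") leaves earlier partitions untouched, so a rerun with a new date adds a partition instead of overwriting history.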
If there is anything wrong with the code above, corrections are welcome.