Several ways to import files into Hive tables
1. Regular Hive tables
1) Load a local file into Hive:
load data local inpath '/home/xuyou/sqoop/imp_bbs_product_sannpy_' into table default.hive_bbs_product_snappy;
2) Map a file already on HDFS into a Hive table. Precondition: the delimiter specified when the Hive table was created must match the delimiter used in the file.
load data inpath '/user/xuyou/sqoop/imp_bbs_product_sannpy_' into table default.hive_bbs_product_snappy;
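The two load statements above can be put together with the matching DDL. A minimal sketch, assuming a tab-separated file and hypothetical `id`/`name` columns (the real schema is not shown in the post):

```sql
-- Hypothetical table whose declared field delimiter matches the file's
CREATE TABLE IF NOT EXISTS default.hive_bbs_product_snappy (
  id   BIGINT,
  name STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;

-- LOCAL: copies the file from the client's local filesystem
LOAD DATA LOCAL INPATH '/home/xuyou/sqoop/imp_bbs_product_sannpy_'
INTO TABLE default.hive_bbs_product_snappy;

-- Without LOCAL: moves the file that is already on HDFS into the table directory
LOAD DATA INPATH '/user/xuyou/sqoop/imp_bbs_product_sannpy_'
INTO TABLE default.hive_bbs_product_snappy;
```

Note that the non-LOCAL form moves (not copies) the HDFS file, so the source path is empty afterwards.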
2. Partitioned tables
alter table db_name.table_name add if not exists partition (src_file_day='$json_startdate') location '/user/hadoop/ods/migu/kesheng/kesheng_event/$json_startdate';
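In the original post the `$json_startdate` variable was mangled by the page renderer. Such a statement is typically built in a shell script that substitutes the day into both the partition spec and the location; a sketch with placeholder `db_name.table_name` (the wrapper itself is an assumption, not shown in the post):

```shell
# Hypothetical wrapper: substitute the day into the ALTER TABLE statement
json_startdate=20200325
sql="alter table db_name.table_name add if not exists partition (src_file_day='${json_startdate}') location '/user/hadoop/ods/migu/kesheng/kesheng_event/${json_startdate}'"
echo "$sql"
# hive -e "$sql"   # commented out: requires a running Hive cluster
```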
3. Recovering a dropped external table
Background: a Hive external table was dropped.
1. If the trash mechanism is enabled, find the corresponding files in the .Trash directory on HDFS and load them back into the table.
2. Or run a special command:
1) First recreate the external table you dropped;
2) Then run MSCK REPAIR TABLE table_name;
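The two recovery steps above, sketched in HiveQL for the table used later in this post (the DDL must match the table that was dropped; the columns here are taken from the query results below, but the delimiter and storage format are assumptions):

```sql
-- 1) Recreate the external table with the same schema and location as before
CREATE EXTERNAL TABLE IF NOT EXISTS temp.tmp_hel_test_user2_ex (
  id   BIGINT,
  name STRING,
  age  INT,
  tel  STRING
)
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/user/hive/warehouse/temp.db/tmp_hel_test_user2_ex';

-- 2) Rebuild the partition metadata from the directories found under LOCATION
MSCK REPAIR TABLE temp.tmp_hel_test_user2_ex;
```

MSCK REPAIR works here because the data directories of an external table survive the DROP; only the metastore entries need to be rebuilt.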
Background:
The partition data of a Hive external table was overwritten.
"overwrite" means delete first, then write, so the data ought to end up in .Trash; in fact, though, an overwrite performs a true delete.
1. Prepare a normal Hive table
select * from temp.tmp_hel_test_user_ex;
+--------------------------+----------------------------+---------------------------+---------------------------+--+
| tmp_hel_test_user_ex.id | tmp_hel_test_user_ex.name | tmp_hel_test_user_ex.age | tmp_hel_test_user_ex.tel |
+--------------------------+----------------------------+---------------------------+---------------------------+--+
| 1 | fz | 25 | 13188888888888 |
| 2 | test | 20 | 13222222222 |
| 3 | dx | 24 | 183938384983 |
| 4 | test1 | 22 | 1111111111 |
| 5 | fwf | 24 | 45155555 |
+--------------------------+----------------------------+---------------------------+---------------------------+--+
2. Prepare an external partitioned table
select * from temp.tmp_hel_test_user2_ex where dt=20200325;
+---------------------------+-----------------------------+----------------------------+----------------------------+---------------------------+--+
| tmp_hel_test_user2_ex.id | tmp_hel_test_user2_ex.name | tmp_hel_test_user2_ex.age | tmp_hel_test_user2_ex.tel | tmp_hel_test_user2_ex.dt |
+---------------------------+-----------------------------+----------------------------+----------------------------+---------------------------+--+
| 1 | fz | 25 | 13188888888888 | 20200325 |
| 2 | test | 20 | 13222222222 | 20200325 |
| 3 | dx | 24 | 183938384983 | 20200325 |
| 4 | test1 | 22 | 1111111111 | 20200325 |
+---------------------------+-----------------------------+----------------------------+----------------------------+---------------------------+--+
3. Overwrite a partition of the external table
insert overwrite table temp.tmp_hel_test_user2_ex partition(dt=20200325) select * from temp.tmp_hel_test_user_ex;
+---------------------------+-----------------------------+----------------------------+----------------------------+---------------------------+--+
| tmp_hel_test_user2_ex.id | tmp_hel_test_user2_ex.name | tmp_hel_test_user2_ex.age | tmp_hel_test_user2_ex.tel | tmp_hel_test_user2_ex.dt |
+---------------------------+-----------------------------+----------------------------+----------------------------+---------------------------+--+
| 1 | fz | 25 | 13188888888888 | 20200325 |
| 2 | test | 20 | 13222222222 | 20200325 |
| 3 | dx | 24 | 183938384983 | 20200325 |
| 4 | test1 | 22 | 1111111111 | 20200325 |
| 5 | fwf | 24 | 45155555 | 20200325 |
+---------------------------+-----------------------------+----------------------------+----------------------------+---------------------------+--+
5 rows selected (0.251 seconds)
4. Query the trash: there is no trace of the overwritten external-table data, confirming that overwrite performs a true delete.
hello@dds-db-047:/mnt/ad06/dev/temp/zltest> hdfs dfs -ls hdfs://ns1/user/hello/.Trash/Current/user/hive/warehouse/temp.db/tmp_hel_test_user2_ex/
ls: `hdfs://ns1/user/hello/.Trash/Current/user/hive/warehouse/temp.db/tmp_hel_test_user2_ex/': No such file or directory
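For completeness: when a table was dropped (rather than overwritten) and its data did land in .Trash, the restore from step 1 above is a plain HDFS move followed by a metadata repair. A sketch using the paths from the listing above (the commands themselves are commented out because they need a live cluster):

```shell
# Hypothetical restore: move the table directory back out of the trash
trash_dir="/user/hello/.Trash/Current/user/hive/warehouse/temp.db/tmp_hel_test_user2_ex"
table_dir="/user/hive/warehouse/temp.db/tmp_hel_test_user2_ex"
echo "restore: $trash_dir -> $table_dir"
# hdfs dfs -mv "$trash_dir" "$table_dir"                      # needs HDFS
# hive -e "MSCK REPAIR TABLE temp.tmp_hel_test_user2_ex"      # rebuild partitions
```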