Hive issue inserting records to partitioned table

After upgrading from HDP 2.5.6 to HDP 2.6.4, Hive became stricter about INSERT OVERWRITE TABLE, which leads to errors. The workaround is either to delete the data and drop the partition before running the INSERT, or to leave the data and partition alone and let the INSERT OVERWRITE replace them directly.



Hi Sam,

Recently we upgraded our cluster from HDP 2.5.6 to HDP 2.6.4 and I am getting a similar error. With the current version, Hive is stricter about INSERT OVERWRITE TABLE. What this means is that you might be deleting the data prior to loading the table but not dropping the partition when you do the INSERT OVERWRITE TABLE.
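As a rough illustration of that sequence in HiveQL (the table, column, and path names below are assumed for the example, not taken from the post):

    -- Hypothetical staging and target tables.
    CREATE TABLE sales_staging (store_id INT, amount DOUBLE, week_id INT);

    CREATE TABLE sales (store_id INT, amount DOUBLE)
    PARTITIONED BY (pk_ppweek INT);

    -- The partition's data files get removed out-of-band, e.g. from the shell:
    --   hdfs dfs -rm -r /warehouse/sales/pk_ppweek=2487
    -- while the partition itself stays registered in the metastore. The
    -- overwrite below can then fail on HDP 2.6.4 with
    -- "Destination directory ... has not be cleaned up."
    INSERT OVERWRITE TABLE sales PARTITION (pk_ppweek = 2487)
    SELECT store_id, amount
    FROM sales_staging
    WHERE week_id = 2487;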

To get around it, do one of the following (a HiveQL sketch of both options follows):

Delete the data and drop the partition prior to running the INSERT OVERWRITE TABLE.

Or don't delete the data or drop the partition for the external table; let the INSERT OVERWRITE TABLE replace it.
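Reusing the hypothetical sales/sales_staging tables from the sketch above, the two options might look like this:

    -- Option 1: drop the partition before reloading. For a managed table
    -- this removes the metastore entry and the partition directory; for an
    -- external table, also delete the directory itself, e.g.
    --   hdfs dfs -rm -r /warehouse/sales/pk_ppweek=2487
    ALTER TABLE sales DROP IF EXISTS PARTITION (pk_ppweek = 2487);

    INSERT OVERWRITE TABLE sales PARTITION (pk_ppweek = 2487)
    SELECT store_id, amount
    FROM sales_staging
    WHERE week_id = 2487;

    -- Option 2: touch nothing beforehand and let INSERT OVERWRITE TABLE
    -- replace the partition's contents in one step.
    INSERT OVERWRITE TABLE sales PARTITION (pk_ppweek = 2487)
    SELECT store_id, amount
    FROM sales_staging
    WHERE week_id = 2487;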

Regards

Khaja Hussain.

Similar Error:

Caused by: java.util.concurrent.ExecutionException: org.apache.hadoop.hive.ql.metadata.HiveException: Destination directory hdfs://data_dir/pk_business=bsc/pk_data_source=pos/pk_frequency=bnw/pk_data_state=c13251_ps2111_bre000_pfc00000_spr000_pfs00000/pk_reporttype=BNN/pk_ppweek=2487 has not be cleaned up.

Reposted from: https://www.cnblogs.com/suanec/p/10442433.html
