Notes on Hive's hive.exec.parallel parameter

This article explains how the hive.exec.parallel parameter controls whether the different jobs within a single SQL statement run concurrently. A simple test shows the parameter's effect on execution speed and resource consumption, and its potential value for our refresh jobs is discussed.
The hive.exec.parallel parameter controls whether the different jobs generated by a single SQL statement may run at the same time. The default is false.

The following is the test procedure for this parameter.

Test SQL:

select r1.a
from (select t.a from sunwg_10 t join sunwg_10000000 s on t.a = s.b) r1
join (select s.b from sunwg_100000 t join sunwg_10 s on t.a = s.b) r2
on (r1.a = r2.b);
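To see how Hive splits this statement into jobs before running it, you can prefix it with EXPLAIN (a standard Hive command; the exact stage names in the output depend on your Hive version):

```sql
-- The STAGE DEPENDENCIES section of the plan lists the MapReduce
-- jobs and shows which stages depend on which; stages with no
-- dependency on each other are candidates for parallel execution.
explain
select r1.a
from (select t.a from sunwg_10 t join sunwg_10000000 s on t.a = s.b) r1
join (select s.b from sunwg_100000 t join sunwg_10 s on t.a = s.b) r2
on (r1.a = r2.b);
```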



1. When the parameter is false, the three jobs execute one after another:

set hive.exec.parallel=false;

2. However, the two subqueries are clearly independent of each other, so they can run in parallel:

set hive.exec.parallel=true;
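When parallel execution is enabled, the companion parameter hive.exec.parallel.thread.number caps how many jobs of one statement may run at once (it defaults to 8). A typical session setup might look like this:

```sql
-- Allow independent jobs within one statement to run concurrently,
-- but cap the concurrency to limit the load placed on the cluster.
set hive.exec.parallel=true;
set hive.exec.parallel.thread.number=4;
```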



Summary:
When resources are plentiful, hive.exec.parallel makes SQL statements that contain independent jobs run faster, but it also consumes more resources at the same time.
It is worth evaluating whether hive.exec.parallel would help our refresh jobs.
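One low-risk way to run that evaluation is to enable the parameter only inside the refresh script rather than cluster-wide, since SET statements apply per session. A sketch (the script name is hypothetical):

```sql
-- refresh.hql (hypothetical script name): settings issued here apply
-- only to this session, so other workloads keep the default behavior.
set hive.exec.parallel=true;
-- ... the refresh statements follow ...
```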

Reposted from http://www.oratea.net/?p=1377