Notes on Hive's hive.exec.parallel parameter

Testing Hive's parallel execution parameter
This article uses a concrete SQL query to illustrate the effect of the hive.exec.parallel parameter in Hive. The parameter controls whether independent jobs generated by a single SQL statement may run concurrently; it defaults to false, so the jobs execute sequentially. Setting it to true allows independent subqueries to be processed in parallel, which can shorten overall query time.

The hive.exec.parallel parameter controls whether the different jobs within a single SQL statement may run at the same time. It defaults to false.
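As a quick sketch of how the switch is used in practice (hive.exec.parallel.thread.number is the companion setting that caps how many jobs may run at once; 8 is its usual default, though this can vary by Hive version):

```sql
-- Allow independent jobs within one query to run concurrently
SET hive.exec.parallel=true;
-- Cap on how many jobs may run at the same time
-- (8 is the common default; tune to cluster capacity)
SET hive.exec.parallel.thread.number=8;
```

Both settings can be applied per session, so they can be enabled only for queries known to contain independent stages.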

 

The test procedure for this parameter is as follows:

 

Test SQL:

select r1.a
from (
   select t.a from sunwg_10 t join sunwg_10000000 s on t.a=s.b) r1 
   join 
   (select s.b from sunwg_100000 t join sunwg_10 s on t.a=s.b) r2 
   on (r1.a=r2.b);
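To see which of the query's stages depend on each other (and hence which could overlap), the plan can be inspected with EXPLAIN; this is a generic Hive facility, not part of the original test:

```sql
-- The STAGE DEPENDENCIES section of the output shows that the two
-- subquery joins have no dependency on each other, so they can run
-- concurrently once hive.exec.parallel=true.
EXPLAIN
select r1.a
from (select t.a from sunwg_10 t join sunwg_10000000 s on t.a=s.b) r1
join (select s.b from sunwg_100000 t join sunwg_10 s on t.a=s.b) r2
on (r1.a=r2.b);
```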

 

1. When the parameter is false, the three jobs execute sequentially:

set hive.exec.parallel=false;

 

2. The two subqueries are in fact independent of each other, however, so they can run in parallel:

set hive.exec.parallel=true;

 

 

A fuller session with parallel execution enabled (hive.exec.parallel=true, four parallel threads) looks like this:

hive> SET hive.exec.dynamic.partition=true;
hive> SET hive.exec.dynamic.partition.mode=nonstrict;
hive> SET mapreduce.job.reduces=3;
hive> SET hive.exec.reducers.bytes.per.reducer=256000000;
hive> SET hive.exec.reducers.max=5;
hive> SET hive.optimize.ppd=true;
hive> SET hive.optimize.index.filter=true;
hive> SET hive.exec.parallel=true;
hive> SET hive.exec.parallel.thread.number=4;
hive> INSERT OVERWRITE TABLE ads_brand_stats
    > SELECT
    >     brand,
    >     order_cnt,
    >     total_amount,
    >     avg_amount,
    >     ROW_NUMBER() OVER (ORDER BY total_amount DESC) AS brand_rank,
    >     ROUND(total_amount * 100.0 / total_sum, 2) AS market_share
    > FROM (
    >     SELECT
    >         brand,
    >         SUM(order_cnt) AS order_cnt,
    >         ROUND(SUM(total_amount), 2) AS total_amount,
    >         ROUND(AVG(avg_amount), 2) AS avg_amount,
    >         SUM(SUM(total_amount)) OVER () AS total_sum
    >     FROM dws_brand_day
    >     WHERE dt BETWEEN '2025-10-11' AND '2025-11-11'
    >       AND brand IS NOT NULL
    >     GROUP BY brand
    > ) t;
Query ID = root_20251031154136_a384577e-cb52-411f-aefc-b453e8e4c777
Total jobs = 4
Launching Job 1 out of 4
Number of reduce tasks not specified. Defaulting to jobconf value of: 3
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1761893573857_0015, Tracking URL = http://server01:8088/proxy/application_1761893573857_0015/
Kill Command = /opt/server/hadoop-3.2.2/bin/mapred job -kill job_1761893573857_0015