set hive.exec.parallel

This article uses a concrete SQL query to illustrate the effect of the hive.exec.parallel parameter in Hive. The parameter controls whether the different jobs generated by a single SQL statement may run in parallel. Testing shows that, when cluster resources are sufficient, enabling it can noticeably improve query efficiency.

The hive.exec.parallel parameter controls whether the different jobs within the same SQL statement are allowed to run concurrently; the default is false.
Below is the test procedure for this parameter:
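
In the Hive CLI, the current session value can be printed by issuing set with just the property name; a minimal check might look like this:

    -- prints the current session value, e.g. hive.exec.parallel=false
    set hive.exec.parallel;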

Test SQL:
select r1.a
from
    (select t.a from sunwg_10 t join sunwg_10000000 s on t.a = s.b) r1
join
    (select s.b from sunwg_100000 t join sunwg_10 s on t.a = s.b) r2
on (r1.a = r2.b);
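
To see where the jobs come from, the same query can be prefixed with EXPLAIN; in a plain MapReduce plan the stage list contains one job per join, i.e. one for each of the two subqueries plus one for the final join between r1 and r2 (a sketch, output omitted):

    -- the stage plan lists one MapReduce job per join:
    -- one for r1, one for r2, and one for the final join
    explain
    select r1.a
    from
        (select t.a from sunwg_10 t join sunwg_10000000 s on t.a = s.b) r1
    join
        (select s.b from sunwg_100000 t join sunwg_10 s on t.a = s.b) r2
    on (r1.a = r2.b);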

1. Set hive.exec.parallel=false;
When the parameter is false, the three jobs run one after another.

2. However, it is clear that the SQL in the two subqueries is actually unrelated, so those two jobs could run in parallel (see the sketch below).
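
To let those two independent jobs run concurrently, parallel execution can be enabled for the session; the degree of concurrency is capped by hive.exec.parallel.thread.number. A minimal sketch:

    -- allow independent jobs of the same query to run at the same time
    set hive.exec.parallel=true;
    -- upper bound on how many jobs may run concurrently (8 is the Hive default)
    set hive.exec.parallel.thread.number=8;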

 

Summary:
When resources are sufficient, hive.exec.parallel makes SQL statements that contain independent, concurrent jobs run faster, but it also consumes more resources.
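
If that trade-off matters, parallel execution can be kept but bounded; a sketch that limits the number of simultaneous jobs (the value 2 is only illustrative):

    -- keep parallel execution but cap the number of concurrent jobs
    set hive.exec.parallel=true;
    set hive.exec.parallel.thread.number=2;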
