Recently, several jobs processing large datasets each requested 999 reducers, instantly saturating the cluster's reduce slots and leaving other tasks unable to get resources. (999 is the default value of hive.exec.reducers.max in older Hive releases, so these jobs were most likely just hitting the default cap.)
As a fix, I am now trying the following change in hive-site.xml:
<property>
  <name>hive.exec.reducers.max</name>
  <value>27</value>
  <description>The maximum number of reducers that will be used. If the value
    specified in the configuration parameter mapred.reduce.tasks is
    negative, Hive will use this as the maximum number of reducers when
    automatically determining the number of reducers.
  </description>
</property>
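The same cap can also be applied per session, which is handy for testing before touching hive-site.xml. A minimal sketch, assuming the Hive CLI or Beeline; the table access_log and the query are hypothetical, just to trigger a reduce phase:

-- Limit reducers for the current session only
SET hive.exec.reducers.max=27;

-- When mapred.reduce.tasks is -1 (the default), Hive estimates the reducer
-- count as roughly:
--   min(hive.exec.reducers.max,
--       ceil(total input bytes / hive.exec.reducers.bytes.per.reducer))
-- so raising bytes-per-reducer is another way to shrink the reducer count.
-- SET hive.exec.reducers.bytes.per.reducer=1000000000;

-- Hypothetical aggregation query that runs a reduce phase
SELECT dt, COUNT(*) AS pv FROM access_log GROUP BY dt;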
Let's see how it works out!