[Hadoop] “Too many fetch-failures” or “reducer stuck” issue

This post describes how to fix Reducer tasks that hang in a Hadoop cluster, covering the key steps of checking the Linux network setup, the Hadoop configuration files, the hostname bindings, and inter-node communication, and walks you through diagnosing and resolving this common problem.


Reposted from: http://lykke.iteye.com/blog/1405838


I post the solution here to help any ‘Hadoopers’ who have the same problem. This issue has been raised many times on the Hadoop mailing list, but no answer has been given so far.
After installing a Hadoop cluster and trying to run some jobs, you may see the Reducers get stuck, while the TaskTracker log on one of the worker nodes shows messages like these:

  INFO org.apache.hadoop.mapred.TaskTracker: task_200801281756_0001_r_000000_0 0.2727273% reduce > copy (9 of 11 at 0.00 MB/s) >
  INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_0 0.2727273% reduce > copy (9 of 11 at 0.00 MB/s) >
  INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_0 0.2727273% reduce > copy (9 of 11 at 0.00 MB/s) >
  INFO org.apache.hadoop.mapred.JobInProgress: Too many fetch-failures for output of task: task_001_r_000000_0 … killing it



The Reducer failed to copy the map output from the other nodes. What you should do is double-check your Linux network and Hadoop configuration:

1. Make sure that all the needed parameters are configured in hadoop-site.xml, and that every worker node has an identical copy of this file (see the configuration sketch after this list).
2. The URIs for the TaskTracker and HDFS should use hostnames instead of IP addresses. I have seen Hadoop clusters that used IP addresses in the URIs; they could start all the services and run jobs, but the tasks never finished successfully.
3. Check the file /etc/hosts on all the nodes and make sure that each hostname is bound to its network IP, not to the loopback address (127.0.0.1), and don’t forget to verify that every node can reach the others by hostname (see the /etc/hosts sketch further below).
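As a sketch of points 1 and 2, the HDFS and JobTracker addresses in hadoop-site.xml would look something like the snippet below. The hostname “master” and the ports are only placeholders for illustration; substitute your own NameNode/JobTracker hostname and ports, and keep the file identical on every node. The point is that the values are hostnames, not IP addresses.

    <?xml version="1.0"?>
    <!-- hadoop-site.xml: identical copy on every node.
         "master" and the ports below are placeholders for illustration. -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>   <!-- hostname, not an IP -->
      </property>
      <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>          <!-- hostname, not an IP -->
      </property>
    </configuration>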
Anyway, it doesn’t make sense to me that Hadoop always tries to resolve an IP address back through the hostname. I consider this a bug in Hadoop and hope it will be fixed in the next stable version.
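For point 3, here is an illustrative /etc/hosts; the hostnames and the 192.168.1.x addresses are made up. What matters is that each node’s own hostname maps to its LAN address and never to the loopback address.

    # /etc/hosts on every node (names and addresses are examples)
    127.0.0.1     localhost
    # wrong: do NOT bind the node's own hostname to the loopback address
    # 127.0.0.1   worker1
    192.168.1.10  master
    192.168.1.11  worker1
    192.168.1.12  worker2

A quick check from each node, for example “ping -c 1 master” and “ping -c 1 worker2”, confirms that every hostname resolves to the expected LAN address and that the nodes can actually reach each other.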
