[Troubleshooting] Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try

This post looks at a write failure on HDFS: the client cannot replace a bad datanode in the existing write pipeline. The error log shows that during the write no spare datanode was available to copy the replica to, so the write kept failing. The fix is to change the datanode replacement policy in the configuration so that writes no longer depend on finding a replacement node.

Error: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try.
2014-05-07 12:21:41,820 WARN [Thread-115] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Graceful stop failed
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[192.168.1.191:50010, 192.168.1.192:50010], original=[192.168.1.191:50010, 192.168.1.192:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:514)
        at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.serviceStop(JobHistoryEventHandler.java:332)
        at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
        at org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
        at org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80)
        at org.apache.hadoop.service.CompositeService.stop(CompositeService.java:159)
        at org.apache.hadoop.service.CompositeService.serviceStop(CompositeService.java:132)
        at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
        at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.shutDownJob(MRAppMaster.java:548)
        at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler$1.run(MRAppMaster.java:599)
Caused by: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[192.168.1.191:50010, 192.168.1.192:50010], original=[192.168.1.191:50010, 192.168.1.192:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:860)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:925)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
Cause: the write cannot succeed. My environment has 3 datanodes and the replication factor is also set to 3, so every write puts all 3 machines into the pipeline. The default replace-datanode-on-failure policy is DEFAULT: with a replication factor of 3 or more, the client tries to find another datanode to copy the replica to. Since the cluster has only 3 machines in total, the moment one datanode fails there is no spare node left to substitute, and the write can never succeed (see the sketch below).
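To make the DEFAULT behavior concrete, the client's decision boils down to something like the sketch below. This is a simplified paraphrase of the description above, not the actual Hadoop source (the real logic lives in HDFS's ReplaceDatanodeOnFailure class); shouldReplace and its parameters are illustrative names.

// Simplified paraphrase of the DEFAULT replace-datanode-on-failure
// behavior described above; names are illustrative, not Hadoop source.
public class ReplacePolicySketch {

    /**
     * @param replication configured replication factor (3 in this post)
     * @param alive       datanodes still healthy in the write pipeline
     * @return whether the client insists on finding a replacement datanode
     */
    static boolean shouldReplace(int replication, int alive) {
        if (replication < 3) {
            return false; // with <= 2 replicas, keep writing to the survivors
        }
        // With >= 3 replicas, the client looks for another datanode to copy
        // to. On a 3-node cluster with one node down there is no candidate,
        // so the write fails with the IOException shown above.
        return alive < replication;
    }

    public static void main(String[] args) {
        System.out.println(shouldReplace(3, 2)); // true  -> must find a new node
        System.out.println(shouldReplace(2, 1)); // false -> write just continues
    }
}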
Solution: edit hdfs-site.xml and add (or modify) the following two properties:

<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
dfs.client.block.write.replace-datanode-on-failure.enable controls whether the client applies any replacement policy at all when a write fails; the default of true is fine and can stay.

dfs.client.block.write.replace-datanode-on-failure.policy controls what that policy is. Under DEFAULT, when there are 3 or more replicas the client tries to swap a new datanode into the pipeline and retry the write; with 2 replicas it does not replace the datanode and simply keeps writing. On a 3-datanode cluster a single unresponsive node therefore breaks every write, so setting the policy to NEVER turns the replacement off.
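The error message itself notes that "a client may configure this via ... its configuration", so instead of (or in addition to) editing hdfs-site.xml cluster-wide, the two settings can be applied per client. Below is a minimal sketch assuming a Java HDFS client; the NameNode URI and file path are illustrative, not from the original post.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteWithNeverPolicy {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same two settings as in hdfs-site.xml, but scoped to this client only
        conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

        // NameNode URI is an assumption; adjust to your cluster
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.1.191:9000"), conf);
        try (FSDataOutputStream out = fs.create(new Path("/tmp/policy-test.txt"))) {
            out.writeBytes("pipeline write with the NEVER replacement policy\n");
        }
        fs.close();
    }
}

Keep in mind that NEVER trades durability for availability: if a datanode drops out of the pipeline, the block simply finishes with fewer live replicas. That trade-off is reasonable on a very small cluster like the 3-node one here, but think twice before using it on a larger one.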