[solved] INFO hdfs.DFSClient: Could not obtain block

This post describes how to resolve a problem where Hadoop HDFS datanodes cannot serve enough files at once. Raising the dfs.datanode.max.xcievers parameter in the Hadoop configuration file hdfs-site.xml increases the number of files a datanode can serve concurrently.


Problem:
10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes contain current block

Solution:
A Hadoop HDFS datanode has an upper bound on the number of files that it will serve at any one time. The upper bound is set by the parameter dfs.datanode.max.xcievers (yes, "xcievers" is misspelled). Before doing any heavy loading, make sure you have configured Hadoop's conf/hdfs-site.xml, setting the xcievers value to at least the following:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>

Be sure to restart HDFS after making the above configuration change.
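
To check that the new value is actually picked up, one option is to read the configuration back through Hadoop's standard Configuration API. The following is a minimal sketch, not part of the original post; the class name CheckXcievers and the path /etc/hadoop/conf/hdfs-site.xml are illustrative, and it assumes the Hadoop client libraries are on the classpath.

// Minimal sketch: read hdfs-site.xml via Hadoop's Configuration API and
// print the effective dfs.datanode.max.xcievers value.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class CheckXcievers {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Path to the edited config file; adjust to your installation.
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));

        // Older Hadoop releases default this to a low value (e.g. 256), which
        // is easily exhausted under heavy load; the post recommends at least 4096.
        int xcievers = conf.getInt("dfs.datanode.max.xcievers", 256);
        System.out.println("dfs.datanode.max.xcievers = " + xcievers);
    }
}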