Incompatible namespaceIDs

This post explains how to fix the namespace ID mismatch error that DataNodes can hit when initializing in a Hadoop cluster. Two workarounds are given: rebuilding the cluster from scratch, and updating the namespace ID on the problematic DataNodes.


When starting Hadoop, jps showed that the DataNode service had not come up on the datanode machine. Checking the log revealed:
2012-06-07 09:41:37,812 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2012-06-07 09:41:37,850 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2012-06-07 09:41:37,852 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2012-06-07 09:41:37,852 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2012-06-07 09:41:38,080 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2012-06-07 09:41:39,556 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ip83/192.168.7.83:49000. Already tried 0 time(s).
2012-06-07 09:41:40,559 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ip83/192.168.7.83:49000. Already tried 1 time(s).
2012-06-07 09:41:41,562 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ip83/192.168.7.83:49000. Already tried 2 time(s).
2012-06-07 09:41:42,565 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ip83/192.168.7.83:49000. Already tried 3 time(s).
2012-06-07 09:41:43,568 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ip83/192.168.7.83:49000. Already tried 4 time(s).
2012-06-07 09:41:46,184 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /home/cheng/hadoop/data/data1: namenode namespaceID = 875533609; datanode namespaceID = 1665404807
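
To confirm the same diagnosis on your own cluster, you can check the process list and grep the datanode log. This is a minimal sketch; the log path assumes the default $HADOOP_HOME/logs layout, and the user/host parts of the file name will differ on your machine:

# on the problematic datanode: jps should list a DataNode process; here it is missing
jps

# search the datanode log for the telltale error
grep "Incompatible namespaceIDs" $HADOOP_HOME/logs/hadoop-*-datanode-*.log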


Solution:
Two workarounds are given below; I used the second one.
Workaround 1: Start from scratch
I can testify that the following steps solve this error, but the side effects won't make you happy (they didn't make me happy either). The crude workaround I have found is to (see the command sketch after the list):
1.     stop the cluster
2.     delete the data directory on the problematic datanode: the directory is specified by dfs.data.dir in conf/hdfs-site.xml; if you followed this tutorial, the relevant directory is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data
3.     reformat the namenode (NOTE: all HDFS data is lost during this process!)
4.     restart the cluster
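As a concrete sketch of these four steps, assuming the tutorial's paths and the Hadoop 1.x-era control scripts (adjust dfs.data.dir to whatever your conf/hdfs-site.xml says, and remember step 3 destroys all HDFS data):

# 1. stop the cluster (run on the namenode)
$HADOOP_HOME/bin/stop-all.sh

# 2. delete the data directory on the problematic datanode
rm -rf /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data

# 3. reformat the namenode -- ALL HDFS DATA IS LOST
$HADOOP_HOME/bin/hadoop namenode -format

# 4. restart the cluster
$HADOOP_HOME/bin/start-all.sh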
If deleting all the HDFS data and starting from scratch does not sound like a good idea (it might be OK during initial setup/testing), give the second approach a try.
Workaround 2: Updating namespaceID of problematic datanodes
Big thanks to Jared Stehler for the following suggestion. I have not tested it myself yet, but feel free to try it out and send me your feedback. This workaround is "minimally invasive" as you only have to edit one file on the problematic datanodes:
1.     stop the datanode
2.     edit the value of namespaceID in <dfs.data.dir>/current/VERSION to match the value of the current namenode
3.     restart the datanode
If you followed the instructions in my tutorials, the full path of the relevant file is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION (background: dfs.data.dir is by default set to ${hadoop.tmp.dir}/dfs/data, and we set hadoop.tmp.dir to /usr/local/hadoop-datastore/hadoop-hadoop).
If you wonder what the contents of VERSION look like, here's one of mine (a shell sketch of the whole procedure follows it):
#contents of <dfs.data.dir>/current/VERSION
namespaceID=393514426
storageID=DS-1706792599-10.10.10.1-50010-1204306713481
cTime=1215607609074
storageType=DATA_NODE
layoutVersion=-13
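
And here is a minimal shell sketch of the whole workaround, assuming the tutorial's paths and that dfs.name.dir takes its default of ${hadoop.tmp.dir}/dfs/name on the namenode host; 875533609 is the namenode namespaceID from the log above, so substitute your own value:

# on the problematic datanode: stop the datanode
$HADOOP_HOME/bin/hadoop-daemon.sh stop datanode

# on the namenode host: look up the namenode's current namespaceID
grep namespaceID /usr/local/hadoop-datastore/hadoop-hadoop/dfs/name/current/VERSION

# back on the datanode: rewrite the namespaceID line to match
sed -i 's/^namespaceID=.*/namespaceID=875533609/' \
  /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION

# restart the datanode
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode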
 
Cause: every namenode format generates a new namespaceID, but tmp/dfs/data still holds the ID from the previous format. Formatting the namenode wipes the namenode's data without wiping the datanodes' data, so the datanodes fail to start. The fix is to clear all directories under tmp on every node before each format.
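As a hedged sketch of that prevention step (again assuming hadoop.tmp.dir is /usr/local/hadoop-datastore/hadoop-hadoop as in the tutorial; the rm must run on every node in the cluster):

# stop everything first, then clear the old state on every node
$HADOOP_HOME/bin/stop-all.sh
rm -rf /usr/local/hadoop-datastore/hadoop-hadoop/*

# a fresh format now creates a namespaceID that no stale datanode copy can conflict with
$HADOOP_HOME/bin/hadoop namenode -format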
 