Starting Hadoop threw an exception; the namenode log shows:
[color=red]java.io.IOException: File /tmp/hadoop-root/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1[/color]
Searching online for the cause turned up this solution:
[quote]Question: I am trying to resolve an IOException error. I have a basic setup, and shortly after running start-dfs.sh I get: java.io.IOException: File /tmp/hadoop-root/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1. Any pointers on how to resolve this? Thanks!
Answer: You'll probably find that even though the name node starts, it doesn't have any data nodes and is completely empty. Whenever hadoop creates a new filesystem, it assigns a large random number to it to prevent you from mixing datanodes from different filesystems by accident. When you reformat the name node, its FS has one ID, but your data nodes still have chunks of the old FS with a different ID and so will refuse to connect to the namenode. You need to make sure these are cleaned up before reformatting. You can do it just by deleting the data node directory, although there's probably a more "official" way to do it.[/quote]
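The "large random number" the answer mentions is the namespaceID stored in each node's VERSION file. The following sketch (not from the original post; the file names and ID values are made up for illustration) shows the kind of comparison involved, using mock VERSION files in a temporary directory:

```shell
# Simulate the namespaceID mismatch with mock VERSION files.
# In a real Hadoop 1.x setup these would live under
# dfs.name.dir/current/VERSION and dfs.data.dir/current/VERSION.
dir=$(mktemp -d)
echo "namespaceID=1172447505" > "$dir/name_VERSION"  # new ID after reformat
echo "namespaceID=398495971"  > "$dir/data_VERSION"  # stale ID on the datanode

name_id=$(cut -d= -f2 "$dir/name_VERSION")
data_id=$(cut -d= -f2 "$dir/data_VERSION")

# A datanode whose ID differs from the namenode's refuses to register,
# which is why replication to 0 nodes is all the namenode can manage.
if [ "$name_id" != "$data_id" ]; then
  echo "namespaceID mismatch: datanode will refuse to join"
fi
rm -rf "$dir"
```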
The root cause was having run the following command:
[color=blue]$hadoop namenode -format[/color]
The fix is to delete the datanode data directory (dfs.data.dir). I checked, and in my setup that is the following directory:
[color=blue]/tmp/hadoop-root/dfs/data[/color]
Then reformat the namenode, and the problem is solved.
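Putting the steps above together, the recovery procedure looks roughly like this (a sketch, assuming the default hadoop.tmp.dir of /tmp/hadoop-${user.name} and the Hadoop 1.x bin scripts; adjust the path for your own dfs.data.dir, and note that this destroys all HDFS data):

```shell
stop-all.sh                       # stop all Hadoop daemons first
rm -rf /tmp/hadoop-root/dfs/data  # discard the datanode's stale namespaceID
hadoop namenode -format           # reformat the namenode (wipes HDFS metadata)
start-all.sh                      # restart; datanodes now register cleanly
```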