Common Hadoop Problems and Solutions

This article covers three common problems in a Hadoop cluster and their solutions: startup failure caused by inconsistent namespaceIDs, missing blocks caused by file system corruption, and files that cannot be deleted while the NameNode is in safe mode.

Problem 1:
2010-10-18 01:18:45,050 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
Incompatible namespaceIDs in /usr/local/hadoop/tmp/dfs/data: namenode namespaceID = 1501733340; datanode namespaceID = 1262603975
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:233)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:298)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
Solution:
1. Stop the cluster:
$ ./stop-all.sh
2. Delete everything under the data directory configured for HDFS (dfs.data.dir on each DataNode):
$ rm -rf /usr/local/hadoop/filesystem/data/
3. Reformat the NameNode:
$ ./hadoop namenode -format
4. Restart the Hadoop cluster:
$ ./start-all.sh
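Alternatively, if you want to keep the existing block data instead of wiping it, a commonly used fix (a sketch; the exact path depends on your dfs.data.dir setting, here taken from the log above) is to edit the namespaceID in the DataNode's VERSION file so it matches the NameNode's, then restart:
$ vi /usr/local/hadoop/tmp/dfs/data/current/VERSION
# change namespaceID=1262603975 to namespaceID=1501733340 (the NameNode's ID from the error log)
$ ./start-all.sh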
Problem 2:
WARNING : There are about 1 missing blocks. Please check the log or run fsck.
Solution:
$ bin/hadoop fsck /
/home/zhaozheng/hdfs/README.txt: CORRUPT block blk_4085337189286784361
/home/zhaozheng/hdfs/README.txt: MISSING 1 blocks of total size 1366 B.
Status: CORRUPT
Total size: 1366 B
Total dirs: 0
Total files: 1
Total blocks (validated): 1 (avg. block size 1366 B)
********************************
CORRUPT FILES: 1
MISSING BLOCKS: 1
MISSING SIZE: 1366 B
CORRUPT BLOCKS: 1
********************************
Minimally replicated blocks: 0 (0.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 2
Average block replication: 0.0
Corrupt blocks: 1
Missing replicas: 0
Number of data-nodes: 2
Number of racks: 1
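Before deleting anything, fsck can also report which blocks a file maps to and which DataNodes hold them (standard fsck flags; the path below is the corrupt file from the output above):
$ bin/hadoop fsck /home/zhaozheng/hdfs/README.txt -files -blocks -locations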
Since the missing block cannot be recovered, delete the corrupt file (it can be re-uploaded afterwards) and run fsck again:
$ bin/hadoop dfs -rm /home/zhaozheng/hdfs/README.txt
$ bin/hadoop fsck /
Status: HEALTHY
Total size: 4 B
Total dirs: 12
Total files: 1
Total blocks (validated): 1 (avg. block size 4 B)
Minimally replicated blocks: 1 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 2
Average block replication: 2.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 2
Number of racks: 1
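If many files are corrupt, fsck can also handle them in bulk instead of removing each file by hand (standard fsck options; -move keeps the damaged files around for inspection, -delete discards them):
$ bin/hadoop fsck / -move      # move corrupt files to /lost+found
$ bin/hadoop fsck / -delete    # delete corrupt files outright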
Problem 3:
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /usr/local/hadoop/tmp/mapred/system. Name node is in safe mode.
Solution:
$ bin/hadoop dfsadmin -safemode leave    # turn off safe mode
Safe mode is OFF
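Forcing safe mode off is only appropriate when the NameNode is merely slow to collect block reports; otherwise it can hide genuinely missing blocks. To check the current state, or simply wait for the NameNode to leave safe mode on its own, the same dfsadmin tool can be used:
$ bin/hadoop dfsadmin -safemode get     # report whether safe mode is ON or OFF
$ bin/hadoop dfsadmin -safemode wait    # block until safe mode is turned off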