2015-10-15 14:12:41,319 INFO datanode.DataNode (DataXceiverServer.java:<init>(75)) - Balancing bandwith is 6250000 bytes/s
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:969)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:940)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
at java.lang.Thread.run(Thread.java:745)
2015-10-15 14:09:09,793 WARN datanode.DataNode (BPServiceActor.java:run(836)) - Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to nn01/192.168.2.101:8020
2015-10-15 14:09:09,794 WARN datanode.DataNode (BPServiceActor.java:run(836)) - Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to nn02t/192.168.2.102:8020
2015-10-15 14:09:09,895 WARN datanode.DataNode (BPOfferService.java:getBlockPoolId(143)) - Block pool ID needed, but service not yet registered with NN
java.lang.Exception: trace
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:899)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
at java.lang.Thread.run(Thread.java:745)
2015-10-15 14:09:09,896 INFO datanode.DataNode (BlockPoolManager.java:remove(99)) - Removed Block pool <registering> (Datanode Uuid unassigned)
2015-10-15 14:09:09,896 WARN datanode.DataNode (BPOfferService.java:getBlockPoolId(143)) - Block pool ID needed, but service not yet registered with NN
java.lang.Exception: trace
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:901)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
at java.lang.Thread.run(Thread.java:745)
2015-10-15 14:09:11,897 WARN datanode.DataNode (DataNode.java:secureMain(2049)) - Exiting Datanode
2015-10-15 14:09:11,900 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 0
2015-10-15 14:09:11,903 INFO datanode.DataNode (StringUtils.java:run(640)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at dn06/192.168.2.110
Solution:
mv /diska/hadoop/hdfs/data/current /diska/hadoop/hdfs/data/current.bak
After this, the DataNode started normally again.
Cause: I had copied the old /diska/hadoop/hdfs/data files back into /diska/hadoop/hdfs/data. Because those files were stale, the NameNode could not find matching registration information for them, which is why the DataNode failed to start.
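The reason moving `current` aside works is that a DataNode's storage directory contains a VERSION file recording the clusterID it was formatted against; if that no longer matches what the NameNode reports, registration fails and the DataNode shuts down as in the log above. Below is a minimal sketch of that mismatch check and fix, simulated in a temporary directory (the real path would be whatever `dfs.datanode.data.dir` points at, e.g. /diska/hadoop/hdfs/data, and the clusterID values here are made up for illustration):

```shell
# Simulate a DataNode storage directory with a stale VERSION file
DATA_DIR=$(mktemp -d)
mkdir -p "$DATA_DIR/current"
printf 'clusterID=CID-stale-0000\n' > "$DATA_DIR/current/VERSION"

# Suppose the NameNode's cluster has a different clusterID
NN_CLUSTER_ID="CID-new-1111"
DN_CLUSTER_ID=$(sed -n 's/^clusterID=//p' "$DATA_DIR/current/VERSION")

if [ "$DN_CLUSTER_ID" != "$NN_CLUSTER_ID" ]; then
  # Mismatch: move the stale metadata aside, as in the fix above.
  # On the next start the DataNode re-registers and rebuilds 'current'.
  mv "$DATA_DIR/current" "$DATA_DIR/current.bak"
fi

ls "$DATA_DIR"
```

In production you would stop the DataNode first, run the `mv` on the real storage directory, then start the DataNode again and confirm in its log that it registers with the NameNode.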