Why the DataNode fails to start in a Hadoop cluster

This post describes how to fix a Hadoop DataNode that fails to start with "Connection refused" errors: delete the Hadoop temporary directories and reformat the NameNode, after which all daemons start successfully. Disabling the firewall is noted as another possible fix.


2013-10-15 09:52:31,351 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sunliang/192.168.1.232:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-10-15 09:52:32,352 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sunliang/192.168.1.232:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-10-15 09:52:33,353 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sunliang/192.168.1.232:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-10-15 09:52:34,354 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sunliang/192.168.1.232:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-10-15 09:52:35,355 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sunliang/192.168.1.232:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-10-15 09:52:37,822 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sunliang/192.168.1.232:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-10-15 09:52:38,823 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sunliang/192.168.1.232:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-10-15 09:52:38,824 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.ConnectException: Call to sunliang/192.168.1.232:9000 failed on connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1142)
        at org.apache.hadoop.ipc.Client.call(Client.java:1118)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
        at com.sun.proxy.$Proxy5.sendHeartbeat(Unknown Source)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:1031)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1588)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:457)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:583)
        at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249)
        at org.apache.hadoop.ipc.Client.call(Client.java:1093)
        ... 5 more

After starting Hadoop, running jps shows that every process except the DataNode is up. The DataNode's log contains the errors shown above: it cannot reach the NameNode at 192.168.1.232:9000.
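Before deleting anything, it is worth confirming on the master that the NameNode is actually listening on the port the DataNode is trying to reach. A minimal check; the host and port 9000 come from the log above, and the netstat flags assume Linux:

```shell
# On the NameNode host: is the NameNode process running at all?
jps | grep NameNode

# Is anything listening on the RPC port configured in core-site.xml?
netstat -tnlp | grep 9000

# From the DataNode host: can we reach the port? "Connection refused"
# means the port is closed (NameNode down); a timeout usually means
# a firewall is dropping packets.
telnet 192.168.1.232 9000
```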

Solution

Stop the cluster and delete the Hadoop temporary directories on every node (the location configured by hadoop.tmp.dir). Then run hadoop namenode -format to reformat the NameNode and restart the cluster with start-all.sh; all daemons should now come up. Note that reformatting erases everything stored in HDFS, so this is only acceptable on a cluster whose data you can afford to lose.
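The steps above can be sketched as the following commands for a Hadoop 1.x cluster (run as the hadoop user; the tmp path is an assumption — use whatever hadoop.tmp.dir points to in your core-site.xml):

```shell
# Stop all daemons first
stop-all.sh

# Remove the Hadoop temp directories on the master and on every slave
# (this is the Hadoop 1.x default when hadoop.tmp.dir is not set)
rm -rf /tmp/hadoop-$USER

# Reformat the NameNode -- this wipes the entire HDFS namespace!
hadoop namenode -format

# Restart and verify that the DataNode process now appears
start-all.sh
jps
```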

Another possible cause is a firewall blocking port 9000 on the NameNode; disable the firewall (or open the port) and try again.
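On the RHEL/CentOS systems common when this post was written, the iptables service can be disabled like this (run as root; on newer firewalld-based systems the commands differ):

```shell
# Stop the firewall for the current session
service iptables stop

# Prevent it from starting again on boot
chkconfig iptables off

# Verify that it is no longer running
service iptables status
```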

Reposted from: https://my.oschina.net/u/2450896/blog/1546379
