org.apache.hadoop.hdfs.qjournal.client.QuorumException

This post records an initialization error hit while starting the Hadoop NameNode: the format step could not check whether the JournalNodes were ready, because the connections to them were refused. The root cause was an incorrect shared edits directory (dfs.namenode.shared.edits.dir) setting in hdfs-site.xml.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
192.168.254.101:8485: Call From hadoop101/192.168.254.101 to hadoop101:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
        at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:900)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:184)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:987)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/01/18 23:43:42 INFO ipc.Client: Retrying connect to server: hadoop102/192.168.254.102:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/01/18 23:43:43 INFO ipc.Client: Retrying connect to server: hadoop103/192.168.254.103:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/01/18 23:43:43 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
192.168.254.101:8485: Call From hadoop101/192.168.254.101 to hadoop101:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
        at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:900)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:184)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:987)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/01/18 23:43:43 INFO util.ExitUtil: Exiting with status 1

16/01/18 23:43:43 INFO namenode.NameNode: SHUTDOWN_MSG: 
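
Before changing any configuration, it is worth confirming on each JournalNode host whether the daemon is actually listening on port 8485; a "Connection refused" like the one above usually means it is not running. A minimal check, assuming the hadoop101/102/103 hosts from the log and a Hadoop 2.x installation with $HADOOP_HOME set:

  # Run on each of hadoop101, hadoop102 and hadoop103
  jps | grep JournalNode             # should list a JournalNode process
  netstat -tnlp | grep 8485          # should show a listener on port 8485

  # If the daemon is not running, start it before formatting the NameNode
  $HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode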



The error above occurred because the following property in hdfs-site.xml

  <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop104:8485;hadoop105:8485/cluster1</value>
  </property>

was not configured correctly. The hosts in the qjournal:// URI must be JournalNodes that are actually running and reachable on port 8485 (note that the log above shows the NameNode trying hadoop101, hadoop102 and hadoop103 on that port, while the value lists hadoop104 and hadoop105), and the path after the final slash is the journal ID shared by the NameNodes of the nameservice.
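
A corrected property would look roughly like the following. This is only a sketch: the hostnames are taken from the JournalNode addresses in the log above, the cluster1 journal ID is kept from the original value, and the dfs.journalnode.edits.dir path is an illustrative example that must match your own layout.

  <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop101:8485;hadoop102:8485;hadoop103:8485/cluster1</value>
  </property>

  <!-- Local directory where each JournalNode stores the shared edits;
       example path only, not taken from the original post -->
  <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/hadoop/data/journalnode</value>
  </property>

After distributing the updated hdfs-site.xml to every node and making sure all JournalNodes are up, hdfs namenode -format should be able to reach the quorum and complete.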
