Connection error when formatting the NameNode

This post records an HDFS format failure and how it was resolved. In an HA cluster, the NameNode could not complete the format operation because a JournalNode had not been started. After checking the configuration and making sure the JournalNodes were running, the problem was resolved.


************************************************************/
18/09/05 21:18:00 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/09/05 21:18:00 INFO namenode.NameNode: createNameNode [-format]
18/09/05 21:18:01 WARN common.Util: Path /root/hadoop/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
18/09/05 21:18:01 WARN common.Util: Path /root/hadoop/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-7625d3f3-06cd-41bd-9ef6-6e880b19aebb
18/09/05 21:18:02 INFO namenode.FSNamesystem: No KeyProvider found.
18/09/05 21:18:02 INFO namenode.FSNamesystem: fsLock is fair: true
18/09/05 21:18:02 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
18/09/05 21:18:02 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/09/05 21:18:02 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/09/05 21:18:02 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18/09/05 21:18:02 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Sep 05 21:18:02
18/09/05 21:18:02 INFO util.GSet: Computing capacity for map BlocksMap
18/09/05 21:18:02 INFO util.GSet: VM type       = 64-bit
18/09/05 21:18:02 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
18/09/05 21:18:02 INFO util.GSet: capacity      = 2^21 = 2097152 entries
18/09/05 21:18:02 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
18/09/05 21:18:02 INFO blockmanagement.BlockManager: defaultReplication         = 3
18/09/05 21:18:02 INFO blockmanagement.BlockManager: maxReplication             = 512
18/09/05 21:18:02 INFO blockmanagement.BlockManager: minReplication             = 1
18/09/05 21:18:02 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
18/09/05 21:18:02 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/09/05 21:18:02 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
18/09/05 21:18:02 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
18/09/05 21:18:02 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
18/09/05 21:18:02 INFO namenode.FSNamesystem: supergroup          = supergroup
18/09/05 21:18:02 INFO namenode.FSNamesystem: isPermissionEnabled = false
18/09/05 21:18:02 INFO namenode.FSNamesystem: Determined nameservice ID: ns1
18/09/05 21:18:02 INFO namenode.FSNamesystem: HA Enabled: true
18/09/05 21:18:02 INFO namenode.FSNamesystem: Append Enabled: true
18/09/05 21:18:02 INFO util.GSet: Computing capacity for map INodeMap
18/09/05 21:18:02 INFO util.GSet: VM type       = 64-bit
18/09/05 21:18:02 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
18/09/05 21:18:02 INFO util.GSet: capacity      = 2^20 = 1048576 entries
18/09/05 21:18:02 INFO namenode.FSDirectory: ACLs enabled? false
18/09/05 21:18:02 INFO namenode.FSDirectory: XAttrs enabled? true
18/09/05 21:18:02 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
18/09/05 21:18:02 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/09/05 21:18:02 INFO util.GSet: Computing capacity for map cachedBlocks
18/09/05 21:18:02 INFO util.GSet: VM type       = 64-bit
18/09/05 21:18:02 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
18/09/05 21:18:02 INFO util.GSet: capacity      = 2^18 = 262144 entries
18/09/05 21:18:03 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
18/09/05 21:18:03 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/09/05 21:18:03 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
18/09/05 21:18:03 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
18/09/05 21:18:03 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
18/09/05 21:18:03 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18/09/05 21:18:03 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/09/05 21:18:03 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/09/05 21:18:03 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/09/05 21:18:03 INFO util.GSet: VM type       = 64-bit
18/09/05 21:18:03 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
18/09/05 21:18:03 INFO util.GSet: capacity      = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /root/hadoop/dfs/name ? (Y or N) Y
18/09/05 21:18:07 INFO ipc.Client: Retrying connect to server: master1/192.168.89.132:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:07 INFO ipc.Client: Retrying connect to server: master/192.168.89.128:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:08 INFO ipc.Client: Retrying connect to server: master1/192.168.89.132:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:08 INFO ipc.Client: Retrying connect to server: master/192.168.89.128:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:09 INFO ipc.Client: Retrying connect to server: master1/192.168.89.132:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:09 INFO ipc.Client: Retrying connect to server: master/192.168.89.128:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:10 INFO ipc.Client: Retrying connect to server: master1/192.168.89.132:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:10 INFO ipc.Client: Retrying connect to server: master/192.168.89.128:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:11 INFO ipc.Client: Retrying connect to server: master1/192.168.89.132:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:11 INFO ipc.Client: Retrying connect to server: master/192.168.89.128:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:12 INFO ipc.Client: Retrying connect to server: master/192.168.89.128:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:12 INFO ipc.Client: Retrying connect to server: master1/192.168.89.132:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:13 INFO ipc.Client: Retrying connect to server: master1/192.168.89.132:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:13 INFO ipc.Client: Retrying connect to server: master/192.168.89.128:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:14 INFO ipc.Client: Retrying connect to server: master/192.168.89.128:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:14 INFO ipc.Client: Retrying connect to server: master1/192.168.89.132:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:15 INFO ipc.Client: Retrying connect to server: master/192.168.89.128:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:15 INFO ipc.Client: Retrying connect to server: master1/192.168.89.132:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:16 INFO ipc.Client: Retrying connect to server: master/192.168.89.128:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:16 INFO ipc.Client: Retrying connect to server: master1/192.168.89.132:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/09/05 21:18:16 WARN namenode.NameNode: Encountered exception during format: 
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 3 successful responses:
192.168.89.129:8485: false
192.168.89.131:8485: false
192.168.89.130:8485: false
1 exceptions thrown:
192.168.89.128:8485: Call From master/192.168.89.128 to master:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
    at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:901)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:202)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1011)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1457)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1582)
18/09/05 21:18:16 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 3 successful responses:
192.168.89.129:8485: false
192.168.89.131:8485: false
192.168.89.130:8485: false
1 exceptions thrown:
192.168.89.128:8485: Call From master/192.168.89.128 to master:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
    at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:901)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:202)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1011)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1457)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1582)
18/09/05 21:18:16 INFO util.ExitUtil: Exiting with status 1
18/09/05 21:18:16 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.89.128
************************************************************/

 

Cause analysis: An HA cluster depends on its JournalNodes; if the NameNode cannot obtain a JournalNode quorum, HDFS can neither be formatted nor started. The hdfs-site.xml configuration is therefore critical. In the log above, three JournalNodes (192.168.89.129, .130, .131) responded, while the call to master:8485 was refused because no JournalNode was listening on that port. Since my configuration lists the NameNode host master as one of the JournalNodes that store the edit log, the JournalNode process on master must be started before formatting. If master were not listed in that configuration, its JournalNode would not need to be running when formatting the NameNode.
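For reference, here is a minimal sketch of the relevant hdfs-site.xml entries. The nameservice ID ns1 and the hosts master and master1 come from the log above; the remaining JournalNode hostnames (node1 through node3) and the local journal directory are placeholders, not the exact values from this cluster:

<!-- The NameNode writes its shared edit log to this JournalNode quorum.
     Every host listed here must have a JournalNode running before the format. -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://master:8485;master1:8485;node1:8485;node2:8485;node3:8485/ns1</value>
</property>
<!-- Where each JournalNode stores the edits locally (placeholder path). -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/root/hadoop/dfs/journal</value>
</property>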

Solution: after starting the JournalNode process on the NameNode host and re-running the command, the NameNode formatted successfully.
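Concretely, the steps look roughly like this on a Hadoop 2.x install (the sbin script name matches the 2018-era log here; on Hadoop 3.x the equivalent is hdfs --daemon start journalnode):

# on every host listed in dfs.namenode.shared.edits.dir
$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode
# confirm a JournalNode JVM is actually running
jps | grep JournalNode
# once all JournalNodes are up, re-run the format on the NameNode host
hdfs namenode -format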
