Hadoop start-dfs error: Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured

After installing Hadoop 2.7.4, running `./sbin/start-dfs.sh` fails with:

```
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
Error: Cannot find configuration directory: /usr/local/hadoop/conf
Error: Cannot find configuration directory: /usr/local/hadoop/conf
Starting secondary namenodes [0.0.0.0]
Error: Cannot find configuration directory: /usr/local/hadoop/conf
```


The cause: in older Hadoop releases, the configuration files hadoop-env.sh, hdfs-site.xml, core-site.xml, and mapred-site.xml lived in /usr/local/hadoop/conf.

Newer releases keep them in /usr/local/hadoop/etc/hadoop.

You therefore need to set the environment variable: `export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop` (note: no spaces around `=` in a shell assignment).

Run `echo $HADOOP_CONF_DIR` to verify where the configuration is being read from.
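The fix above, spelled out as shell. A minimal sketch assuming the install prefix used in this post (`/usr/local/hadoop`); adjust the paths to your own layout:

```shell
# Point Hadoop's scripts at the new-style config directory.
# NOTE: shell assignments must have no spaces around '='.
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

# To make the setting survive re-login, append the same two lines
# to ~/.bashrc (or set HADOOP_CONF_DIR in etc/hadoop/hadoop-env.sh):
#   echo 'export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop' >> ~/.bashrc

# Verify:
echo "$HADOOP_CONF_DIR"   # should print /usr/local/hadoop/etc/hadoop
```

After setting the variable, re-run `./sbin/start-dfs.sh` from a shell where `echo $HADOOP_CONF_DIR` prints the expected path.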

```
xyq@ubuntu:~$ # Check Hadoop's core-site.xml configuration
xyq@ubuntu:~$ cat $HADOOP_HOME/etc/hadoop/core-site.xml | grep -A 1 "fs.defaultFS"
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
xyq@ubuntu:~$ # Check the NameNode safe-mode state
xyq@ubuntu:~$ hdfs dfsadmin -safemode get
Safe mode is OFF
xyq@ubuntu:~$
xyq@ubuntu:~$ # If it is in safe mode, try to leave it
xyq@ubuntu:~$ hdfs dfsadmin -safemode leave
Safe mode is OFF
xyq@ubuntu:~$
xyq@ubuntu:~$ # Check the NameNode log
xyq@ubuntu:~$ tail -n 50 $HADOOP_HOME/logs/hadoop-xyq-namenode-ubuntu.log
2025-09-28 23:33:54,386 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:9000
2025-09-28 23:33:54,390 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2025-09-28 23:33:54,390 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Initializing quota with 12 thread(s)
2025-09-28 23:33:54,401 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Quota initialization completed in 11 milliseconds name space=1 storage space=0 storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
2025-09-28 23:33:54,415 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2025-09-28 23:33:57,025 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:9866, datanodeUuid=8438e1a3-a807-46a0-ac7f-c66ca8d83952, infoPort=9864, infoSecurePort=0, ipcPort=9867, storageInfo=lv=-57;cid=CID-6898e686-facb-4c51-9250-fae7b1097a80;nsid=1985427683;c=1759127628880) storage 8438e1a3-a807-46a0-ac7f-c66ca8d83952
2025-09-28 23:33:57,026 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:9866
2025-09-28 23:33:57,027 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockReportLeaseManager: Registered DN 8438e1a3-a807-46a0-ac7f-c66ca8d83952 (127.0.0.1:9866).
2025-09-28 23:33:57,113 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-d30c1b17-57f9-4228-80d5-454f4fd2fef2 for DN 127.0.0.1:9866
2025-09-28 23:33:57,167 INFO BlockStateChange: BLOCK* processReport 0xe7740e00b597b3f with lease ID 0xb4000eaed9f78008: Processing first storage report for DS-d30c1b17-57f9-4228-80d5-454f4fd2fef2 from datanode DatanodeRegistration(127.0.0.1:9866, datanodeUuid=8438e1a3-a807-46a0-ac7f-c66ca8d83952, infoPort=9864, infoSecurePort=0, ipcPort=9867, storageInfo=lv=-57;cid=CID-6898e686-facb-4c51-9250-fae7b1097a80;nsid=1985427683;c=1759127628880)
2025-09-28 23:33:57,169 INFO BlockStateChange: BLOCK* processReport 0xe7740e00b597b3f with lease ID 0xb4000eaed9f78008: from storage DS-d30c1b17-57f9-4228-80d5-454f4fd2fef2 node DatanodeRegistration(127.0.0.1:9866, datanodeUuid=8438e1a3-a807-46a0-ac7f-c66ca8d83952, infoPort=9864, infoSecurePort=0, ipcPort=9867, storageInfo=lv=-57;cid=CID-6898e686-facb-4c51-9250-fae7b1097a80;nsid=1985427683;c=1759127628880), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2025-09-28 23:50:16,047 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 23 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 5
2025-09-28 23:50:16,216 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741825_1001, replicas=127.0.0.1:9866 for /hbase/.tmp/hbase.version
2025-09-28 23:50:16,574 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* blk_1073741825_1001 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /hbase/.tmp/hbase.version
2025-09-28 23:50:16,990 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /hbase/.tmp/hbase.version is closed by DFSClient_NONMAPREDUCE_970287547_1
2025-09-28 23:50:17,066 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741826_1002, replicas=127.0.0.1:9866 for /hbase/.tmp/hbase.id
2025-09-28 23:50:17,122 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /hbase/.tmp/hbase.id is closed by DFSClient_NONMAPREDUCE_970287547_1
2025-09-28 23:50:17,517 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741827_1003, replicas=127.0.0.1:9866 for /hbase/.tmp/hbase-hbck.lock
2025-09-28 23:50:17,535 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* blk_1073741827_1003 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /hbase/.tmp/hbase-hbck.lock
2025-09-28 23:50:17,939 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /hbase/.tmp/hbase-hbck.lock is closed by DFSClient_NONMAPREDUCE_970287547_1
2025-09-28 23:50:18,107 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741828_1004, replicas=127.0.0.1:9866 for /hbase/MasterData/data/master/store/.tabledesc/.tableinfo.0000000001.913
2025-09-28 23:50:18,127 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /hbase/MasterData/data/master/store/.tabledesc/.tableinfo.0000000001.913 is closed by DFSClient_NONMAPREDUCE_970287547_1
2025-09-28 23:50:18,200 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741829_1005, replicas=127.0.0.1:9866 for /hbase/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.regioninfo
2025-09-28 23:50:18,222 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* blk_1073741829_1005 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /hbase/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.regioninfo
2025-09-28 23:50:18,626 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /hbase/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.regioninfo is closed by DFSClient_NONMAPREDUCE_970287547_1
2025-09-28 23:50:18,766 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741830_1006, replicas=127.0.0.1:9866 for /hbase/MasterData/WALs/ubuntu,16000,1759128611808/ubuntu%2C16000%2C1759128611808.1759128618722
2025-09-29 00:03:38,421 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 55 Total time for transactions(ms): 30 Number of transactions batched in Syncs: 9 Number of syncs: 46 SyncTimes(ms): 38
2025-09-29 00:03:38,574 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741831_1007, replicas=127.0.0.1:9866 for /hbase/.tmp/hbase-hbck.lock
2025-09-29 00:03:38,608 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /hbase/.tmp/hbase-hbck.lock is closed by DFSClient_NONMAPREDUCE_-989801063_1
2025-09-29 00:03:38,788 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: recoverLease: [Lease. Holder: DFSClient_NONMAPREDUCE_970287547_1, pending creates: 1], src=/hbase/MasterData/WALs/ubuntu,16000,1759128611808-dead/ubuntu%2C16000%2C1759128611808.1759128618722 from client DFSClient_NONMAPREDUCE_970287547_1
2025-09-29 00:03:38,788 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease. Holder: DFSClient_NONMAPREDUCE_970287547_1, pending creates: 1], src=/hbase/MasterData/WALs/ubuntu,16000,1759128611808-dead/ubuntu%2C16000%2C1759128611808.1759128618722
2025-09-29 00:03:38,788 WARN org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.internalReleaseLease: File /hbase/MasterData/WALs/ubuntu,16000,1759128611808-dead/ubuntu%2C16000%2C1759128611808.1759128618722 has not been closed. Lease recovery is in progress. RecoveryId = 1008 for block blk_1073741830_1006
2025-09-29 00:03:39,619 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: commitBlockSynchronization(oldBlock=BP-1684521762-127.0.1.1-1759127628880:blk_1073741830_1006, newgenerationstamp=1008, newlength=0, newtargets=[], closeFile=true, deleteBlock=true)
2025-09-29 00:03:39,623 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: commitBlockSynchronization(oldBlock=BP-1684521762-127.0.1.1-1759127628880:blk_1073741830_1006, file=/hbase/MasterData/WALs/ubuntu,16000,1759128611808-dead/ubuntu%2C16000%2C1759128611808.1759128618722, newgenerationstamp=1008, newlength=0, newtargets=[]) successful
2025-09-29 00:03:42,886 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741832_1009, replicas=127.0.0.1:9866 for /hbase/MasterData/WALs/ubuntu,16000,1759129414156/ubuntu%2C16000%2C1759129414156.1759129422859
2025-09-29 00:16:00,250 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2025-09-29 00:16:00,250 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2025-09-29 00:16:00,250 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 1, 73
2025-09-29 00:16:00,251 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 74 Total time for transactions(ms): 31 Number of transactions batched in Syncs: 12 Number of syncs: 62 SyncTimes(ms): 50
2025-09-29 00:16:00,252 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 74 Total time for transactions(ms): 31 Number of transactions batched in Syncs: 12 Number of syncs: 63 SyncTimes(ms): 51
2025-09-29 00:16:00,253 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /tmp/hadoop-xyq/dfs/name/current/edits_inprogress_0000000000000000001 -> /tmp/hadoop-xyq/dfs/name/current/edits_0000000000000000001-0000000000000000074
2025-09-29 00:16:00,271 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 75
2025-09-29 00:16:00,482 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Sending fileName: /tmp/hadoop-xyq/dfs/name/current/fsimage_0000000000000000000, fileSize: 398. Sent total: 398 bytes. Size of last segment intended to send: -1 bytes.
2025-09-29 00:16:00,508 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Sending fileName: /tmp/hadoop-xyq/dfs/name/current/edits_0000000000000000001-0000000000000000074, fileSize: 7094. Sent total: 7094 bytes. Size of last segment intended to send: -1 bytes.
2025-09-29 00:16:01,165 INFO org.apache.hadoop.hdfs.server.namenode.ImageServlet: Rejecting a fsimage due to small time delta and txnid delta. Time since previous checkpoint is 2532 expecting at least 2700 txnid delta since previous checkpoint is 74 expecting at least 1000000
2025-09-29 00:18:05,167 WARN org.apache.hadoop.ipc.Server: Incorrect RPC Header length from localhost:45498 / 127.0.0.1:45498. Expected: java.nio.HeapByteBuffer[pos=0 lim=4 cap=4]. Actual: java.nio.HeapByteBuffer[pos=0 lim=4 cap=4]
2025-09-29 00:19:32,199 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode is already OFF
xyq@ubuntu:~$ # Check which port the NameNode is actually listening on
xyq@ubuntu:~$ netstat -tlnp | grep java | grep 9000
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      10675/java
xyq@ubuntu:~$
xyq@ubuntu:~$ # Check which port the DataNode is listening on
xyq@ubuntu:~$ netstat -tlnp | grep java | grep 9866
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
tcp        0      0 0.0.0.0:9866            0.0.0.0:*               LISTEN      10834/java
xyq@ubuntu:~$
hbase:001:0> list
TABLE
2025-09-29 00:22:37,296 INFO [main] client.RpcRetryingCallerImpl (RpcRetryingCallerImpl.java:callWithRetries(130)) - Call exception, tries=6, retries=8, started=5677 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
  at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:3173)
  at org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:1163)
  at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415)
  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
  at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
  at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
, details=, see https://s.apache.org/timeout
2025-09-29 00:22:41,346 INFO [main] client.RpcRetryingCallerImpl (RpcRetryingCallerImpl.java:callWithRetries(130)) - Call exception, tries=7, retries=8, started=9727 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
  at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:3173)
  at org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:1163)
  at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415)
  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
  at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
  at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
, details=, see https://s.apache.org/timeout
ERROR: org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
  at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:3173)
  at org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:1163)
  at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415)
  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
  at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
  at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
For usage try 'help "list"'
Took 9.8913 seconds
hbase:002:0>
```

How can the HBase failure shown above be resolved?
Running `sbin/stop-dfs.sh && sbin/start-dfs.sh` and hitting `Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured` usually means the NameNode RPC address is missing or wrong in the Hadoop configuration files. Fix it as follows:

#### 1. Configure `core-site.xml`

Add or modify the following in `$HADOOP_HOME/etc/hadoop/core-site.xml`:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

Here `localhost` can be replaced with the hostname or IP address of your NameNode host, and `9000` is the NameNode RPC port; adjust as needed.

#### 2. Configure `hdfs-site.xml`

Add or modify the following in `$HADOOP_HOME/etc/hadoop/hdfs-site.xml`:

```xml
<configuration>
  <property>
    <name>dfs.namenode.rpc-address</name>
    <value>localhost:9000</value>
  </property>
  <property>
    <name>dfs.namenode.servicerpc-address</name>
    <value>localhost:9001</value>
  </property>
</configuration>
```

Again, replace `localhost` with the actual NameNode hostname or IP; `9000` and `9001` are the RPC and service-RPC ports respectively, and can be adjusted as needed.

#### 3. Reformat the NameNode (optional)

If the problem persists after the configuration changes, you can try reformatting the NameNode:

```bash
$HADOOP_HOME/bin/hdfs namenode -format
```

Note: reformatting erases all existing HDFS data; proceed with caution.

#### 4. Restart HDFS

After completing the configuration, restart HDFS:

```bash
sbin/stop-dfs.sh && sbin/start-dfs.sh
```
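Before restarting, it is worth checking that the value Hadoop will actually read parses to a sane `host:port`. A minimal, self-contained sketch (the temp file below stands in for your real core-site.xml; the extraction is a plain `grep`/`sed`, not a full XML parse):

```shell
# Write a sample core-site.xml to a temp file (stand-in for the real one).
conf=$(mktemp)
cat > "$conf" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

# Pull the <value> that follows the fs.defaultFS <name> line.
uri=$(grep -A1 'fs.defaultFS' "$conf" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')

# Strip the hdfs:// scheme to get the RPC endpoint the NameNode will bind.
host_port=${uri#hdfs://}
echo "RPC endpoint: $host_port"

rm -f "$conf"
```

On a live cluster you can skip the parsing and ask Hadoop directly for the effective value with `hdfs getconf -confKey fs.defaultFS`; it should print the same `hdfs://host:port` URI.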