hdfs dfsadmin -report shows Configured Capacity: 0 (0 B), Present Capacity: 0 (0 B), DFS Remaining: 0 (0 B), DFS Used: 0 (0 B)

This post resolves an issue in an HDFS cluster where running `hdfs dfsadmin -report` shows every capacity field as 0. The root cause is a misconfigured hosts file: the master node's hosts file still contained a `127.0.0.1 localhost` or `127.0.0.1 master` entry alongside the cluster IPs, which prevented HDFS from correctly identifying the cluster nodes. After deleting those entries, HDFS reports data normally again.


Running `hdfs dfsadmin -report` completes without error, but every field is 0:
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: 0.00%
Replicated Blocks:
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0
Erasure Coded Block Groups:
Low redundancy block groups: 0
Block groups with corrupt internal blocks: 0
Missing block groups: 0
Pending deletion blocks: 0

Every field is 0.
The problem lies in the hosts file: on the master node, besides the cluster IP entries, the original `127.0.0.1 master` or `127.0.0.1 localhost` lines had not been removed. Deleting those two lines fixes the issue.
Normal output restored:
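As a concrete sketch of the fix (the hostnames and IPs follow this cluster's example, and the exact shape of the offending lines is an assumption), the loopback entries can be stripped with `sed`. The commands below operate on a temporary copy rather than the real `/etc/hosts`, so they are safe to try:

```shell
# Hypothetical /etc/hosts on the master node, with the two problematic
# loopback entries still present (written to a temp copy for safety).
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1   localhost
127.0.0.1   master
192.168.100.104 hadoop04
192.168.100.105 hadoop05
192.168.100.106 hadoop06
EOF

# Delete every 127.0.0.1 line that maps to localhost or master,
# leaving only the real cluster-IP-to-hostname entries.
sed -i -E '/^127\.0\.0\.1[[:space:]]+(localhost|master)/d' /tmp/hosts.sample
cat /tmp/hosts.sample
```

After editing the real `/etc/hosts` the same way, restart HDFS (for example with `stop-dfs.sh` followed by `start-dfs.sh`) and re-run `hdfs dfsadmin -report` to confirm the capacities are reported.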

[root@hadoop04 ~]# hadoop jar ./film2.jar CleanDriver /film/input /film/outputs/cleandata
25/03/15 15:11:24 INFO client.RMProxy: Connecting to ResourceManager at hadoop04/192.168.100.104:8032
Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://hadoop04:9000/film/input already exists
	at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
	at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
	at CleanDriver.main(CleanDriver.java:36)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:226)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:141)

[root@hadoop04 ~]# hdfs dfsadmin -report
Configured Capacity: 54716792832 (50.96 GB)
Present Capacity: 45412380672 (42.29 GB)
DFS Remaining: 45412196352 (42.29 GB)
DFS Used: 184320 (180 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (3):

Name: 192.168.100.105:50010 (hadoop05)
Hostname: hadoop05
Decommission Status : Normal
Configured Capacity: 18238930944 (16.99 GB)
DFS Used: 61440 (60 KB)
Non DFS Used: 3007356928 (2.80 GB)
DFS Remaining: 15231512576 (14.19 GB)
DFS Used%: 0.00%
DFS Remaining%: 83.51%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Mar 15 15:11:46 CST 2025

Name: 192.168.100.104:50010 (hadoop04)
Hostname: hadoop04
Decommission Status : Normal
Configured Capacity: 18238930944 (16.99 GB)
DFS Used: 61440 (60 KB)
Non DFS Used: 3289276416 (3.06 GB)
DFS Remaining: 14949593088 (13.92 GB)
DFS Used%: 0.00%
DFS Remaining%: 81.97%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Mar 15 15:11:46 CST 2025

Name: 192.168.100.106:50010 (hadoop06)
Hostname: hadoop06
Decommission Status : Normal
Configured Capacity: 18238930944 (16.99 GB)
DFS Used: 61440 (60 KB)
Non DFS Used: 3007778816 (2.80 GB)
DFS Remaining: 15231090688 (14.19 GB)
DFS Used%: 0.00%
DFS Remaining%: 83.51%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Mar 15 15:11:46 CST 2025
[root@hadoop04 ~]#
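Incidentally, the session above also hit a common MapReduce submission error: `FileAlreadyExistsException`, raised because the path the job tried to use as its output directory already existed on HDFS (MapReduce refuses to overwrite an existing output directory). A common remedy, sketched below with the paths taken from the session above, is to remove the stale directory (or pass a fresh path) before resubmitting:

```
# Remove the conflicting directory reported in the exception
# (double-check it holds nothing you still need before deleting).
hdfs dfs -rm -r -f /film/outputs/cleandata

# Resubmit the job with the same arguments.
hadoop jar ./film2.jar CleanDriver /film/input /film/outputs/cleandata
```

Note that the exception names `hdfs://hadoop04:9000/film/input` as the output directory even though the command passed `/film/outputs/cleandata` last, which may indicate the driver reads its argument positions in a different order; check how `CleanDriver` assigns its input and output paths before deleting anything.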