A Hadoop error message and its fix

This post records a warning I hit while using Hadoop: uploading files to HDFS failed because blocks could not be replicated to enough nodes. It lists the possible causes and the fix; in my case the root cause was insufficient disk space, and emptying the trash solved the problem.


Hadoop reports WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc...

When uploading files from the local filesystem to HDFS:

hadoop@hadoop1:~$ hadoop fs -mkdir input
hadoop@hadoop1:~$ hadoop fs -put input/* input

The command printed:

hadoop@hadoop1:~$ hadoop fs -mkdir input
hadoop@hadoop1:~$ hadoop fs -put input/* input
17/08/30 19:00:31 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/input/21540.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1622)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:729)
    at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

    at org.apache.hadoop.ipc.Client.call(Client.java:1092)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3691)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3551)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2754)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2994)

17/08/30 19:00:31 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
17/08/30 19:00:31 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/hadoop/input/21540.txt" - Aborting...
put: java.io.IOException: File /user/hadoop/input/21540.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1622)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:729)
    at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)


17/08/30 19:00:31 ERROR hdfs.DFSClient: Exception closing file /user/hadoop/input/21540.txt : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/input/21540.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1622)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:729)
    at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/input/21540.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1622)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:729)
    at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

    at org.apache.hadoop.ipc.Client.call(Client.java:1092)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3691)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3551)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2754)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2994)

I tried many fixes suggested online. Some said to disable the firewall, but in my case that made no difference.
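Before blaming the firewall, it helps to check whether the NameNode can see any live DataNodes at all: "could only be replicated to 0 nodes" almost always means no DataNode was available to accept the block, either because the daemon is down or because no node has free disk. A diagnostic sketch, assuming a Hadoop 1.x-era CLI and default log locations:

```shell
# Confirm the DataNode daemon is actually running on this machine.
jps

# Ask the NameNode for a cluster report; the "Datanodes available" line
# and each node's remaining capacity are what matter here.
hadoop dfsadmin -report

# Pull out just the live-node summary (the report format may vary
# slightly between Hadoop versions).
hadoop dfsadmin -report | awk -F': ' '/Datanodes available/ {print $2}'
```

If the report shows zero live DataNodes, or a node with no remaining capacity, the firewall is not the problem: the cluster simply has nowhere to put the block.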

Fix: the problem was most likely caused by insufficient free space under /home.
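To confirm and fix this, check how full the partition holding the DataNode's data directory is, then reclaim some space. A minimal sketch; the trash path below is the usual freedesktop.org location on Ubuntu and is an assumption, so adjust it for your desktop environment:

```shell
# How full is the partition holding /home? A (nearly) full disk makes
# the local DataNode refuse to accept new blocks.
df -h /home

# Empty the desktop trash to reclaim space (typical GNOME/Ubuntu
# location; your system may differ).
rm -rf ~/.local/share/Trash/files/* ~/.local/share/Trash/info/*

# Verify that space is back.
df -h /home
```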

I emptied the trash to free up some space, and the problem went away:

hadoop@hadoop1:~$ hadoop fs -mkdir input
hadoop@hadoop1:~$ hadoop fs -put input/* input
hadoop@hadoop1:~$ hadoop jar wordcount1.jar input out
17/08/30 19:23:14 INFO input.FileInputFormat: Total input paths to process : 5
17/08/30 19:23:14 INFO util.NativeCodeLoader: Loaded the native-hadoop library
17/08/30 19:23:14 WARN snappy.LoadSnappy: Snappy native library not loaded
17/08/30 19:23:15 INFO mapred.JobClient: Running job: job_201708301922_0001
17/08/30 19:23:16 INFO mapred.JobClient:  map 0% reduce 0%
17/08/30 19:23:22 INFO mapred.JobClient:  map 40% reduce 0%
17/08/30 19:23:25 INFO mapred.JobClient:  map 80% reduce 0%
17/08/30 19:23:26 INFO mapred.JobClient:  map 100% reduce 0%
17/08/30 19:23:30 INFO mapred.JobClient:  map 100% reduce 33%
17/08/30 19:23:32 INFO mapred.JobClient:  map 100% reduce 100%
17/08/30 19:23:32 INFO mapred.JobClient: Job complete: job_201708301922_0001
17/08/30 19:23:32 INFO mapred.JobClient: Counters: 29
17/08/30 19:23:32 INFO mapred.JobClient:   Job Counters 
17/08/30 19:23:32 INFO mapred.JobClient:     Launched reduce tasks=1
17/08/30 19:23:32 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=9971
17/08/30 19:23:32 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
17/08/30 19:23:32 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
17/08/30 19:23:32 INFO mapred.JobClient:     Launched map tasks=5
17/08/30 19:23:32 INFO mapred.JobClient:     Data-local map tasks=1
17/08/30 19:23:32 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=9425
17/08/30 19:23:32 INFO mapred.JobClient:   File Output Format Counters 
17/08/30 19:23:32 INFO mapred.JobClient:     Bytes Written=25
17/08/30 19:23:32 INFO mapred.JobClient:   FileSystemCounters
17/08/30 19:23:32 INFO mapred.JobClient:     FILE_BYTES_READ=55
17/08/30 19:23:32 INFO mapred.JobClient:     HDFS_BYTES_READ=584
17/08/30 19:23:32 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=143341
17/08/30 19:23:32 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=25
17/08/30 19:23:32 INFO mapred.JobClient:   File Input Format Counters 
17/08/30 19:23:32 INFO mapred.JobClient:     Bytes Read=20
17/08/30 19:23:32 INFO mapred.JobClient:   Map-Reduce Framework
17/08/30 19:23:32 INFO mapred.JobClient:     Map output materialized bytes=79
17/08/30 19:23:32 INFO mapred.JobClient:     Map input records=10
17/08/30 19:23:32 INFO mapred.JobClient:     Reduce shuffle bytes=79
17/08/30 19:23:32 INFO mapred.JobClient:     Spilled Records=12
17/08/30 19:23:32 INFO mapred.JobClient:     Map output bytes=55
17/08/30 19:23:32 INFO mapred.JobClient:     Total committed heap usage (bytes)=880803840
17/08/30 19:23:32 INFO mapred.JobClient:     CPU time spent (ms)=1860
17/08/30 19:23:32 INFO mapred.JobClient:     Combine input records=9
17/08/30 19:23:32 INFO mapred.JobClient:     SPLIT_RAW_BYTES=564
17/08/30 19:23:32 INFO mapred.JobClient:     Reduce input records=6
17/08/30 19:23:32 INFO mapred.JobClient:     Reduce input groups=6
17/08/30 19:23:32 INFO mapred.JobClient:     Combine output records=6
17/08/30 19:23:32 INFO mapred.JobClient:     Physical memory (bytes) snapshot=1011617792
17/08/30 19:23:32 INFO mapred.JobClient:     Reduce output records=6
17/08/30 19:23:32 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=4073361408
17/08/30 19:23:32 INFO mapred.JobClient:     Map output records=9
time=19763
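With the job finished, the word counts can be read straight off HDFS. A sketch: under the new MapReduce API the reducer output is normally named part-r-00000 (part-00000 under the old API), so adjust the file name to whatever `hadoop fs -ls out` actually shows:

```shell
# List the job's output directory.
hadoop fs -ls out

# Print the word counts, then sort locally by count, highest first.
hadoop fs -cat out/part-r-00000 | sort -k2 -nr | head
```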