Hadoop M/R runtime error

This post records an EOFException and IOException hit while a Hadoop job was writing to HDFS, including the full error log and the fix: do not run Hadoop jobs as the root user.


2012-05-08 15:29:02,927 WARN org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block blk_-3688559193353374185_253276
java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:180)
    at java.io.DataInputStream.readLong(DataInputStream.java:399)
    at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:120)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2956)
2012-05-08 15:29:02,927 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcher.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:72)
    at sun.nio.ch.IOUtil.write(IOUtil.java:43)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
    at java.io.DataOutputStream.write(DataOutputStream.java:90)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2848)
2012-05-08 15:29:02,927 INFO org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_-3688559193353374185_253276 waiting for responder to exit.
2012-05-08 15:29:02,927 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_-3688559193353374185_253276 bad datanode[0] x.x.x.x:50010
2012-05-08 15:29:02,928 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_-3688559193353374185_253276 in pipeline x.x.x.x:50010, x.x.x.x:50010: bad datanode x.x.x.x:50010
2012-05-08 15:29:02,975 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcher.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:72)
    at sun.nio.ch.IOUtil.write(IOUtil.java:43)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
    at java.io.DataOutputStream.write(DataOutputStream.java:90)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2848)
2012-05-08 15:29:02,975 WARN org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block blk_-3688559193353374185_253277
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:202)
    at sun.nio.ch.IOUtil.read(IOUtil.java:175)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:243)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.DataInputStream.readFully(DataInputStream.java:178)
    at java.io.DataInputStream.readLong(DataInputStream.java:399)
    at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:120)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2956)
2012-05-08 15:29:02,975 INFO org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_-3688559193353374185_253277 waiting for responder to exit.
2012-05-08 15:29:02,976 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_-3688559193353374185_253277 bad datanode[0] x.x.x.x:50010
2012-05-08 15:29:02,979 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2012-05-08 15:29:02,981 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:java.io.IOException: All datanodes x.x.x.x:50010 are bad. Aborting...
2012-05-08 15:29:02,982 WARN org.apache.hadoop.mapred.Child: Error running child
java.io.IOException: All datanodes x.x.x.x:50010 are bad. Aborting...
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3088)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1900(DFSClient.java:2627)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2799)
2012-05-08 15:29:02,984 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task
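
Reading the log top to bottom: the ResponseProcessor thread fails to read a pipeline ack (EOFException, then connection reset), the client marks each datanode in the write pipeline as bad, and once no healthy pipeline member remains the write aborts with "All datanodes ... are bad". Before blaming the client user, it can be worth confirming that the datanodes really are healthy; a minimal diagnostic sketch (hadoop dfsadmin -report is the Hadoop 1.x form; newer releases spell it hdfs dfsadmin -report):

# Run as the hadoop user: lists live/dead datanodes and their capacity,
# which shows whether the "bad" datanode is down or only refusing this client.
hadoop dfsadmin -report

# A low per-user open-file limit on the client or datanode side can surface
# as the same pipeline failures, so check it for the account running the job.
ulimit -n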

Solution: submit the job as the hadoop user rather than as root.
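
A minimal sketch of the fix, assuming a dedicated hadoop account owns the cluster installation; the jar name, main class, and HDFS paths are hypothetical placeholders, not taken from the failing job:

# Resubmit the job as the hadoop user instead of root
# (myjob.jar, com.example.MyJob and the paths are placeholders).
sudo -u hadoop hadoop jar myjob.jar com.example.MyJob /input /output

# Equivalent with su, if sudo is not configured for this account:
su - hadoop -c "hadoop jar myjob.jar com.example.MyJob /input /output"

# If root has already written files under the job's HDFS directories, hand
# ownership back to the hadoop user (assumes hadoop is the HDFS superuser,
# i.e. the account that started the namenode).
sudo -u hadoop hadoop fs -chown -R hadoop:hadoop /user/hadoop

Submitting everything from one dedicated account also keeps HDFS file ownership consistent, which avoids the permission-related variants of this failure.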
