Hadoop Exception Summary... (continuously updated)

This article describes a "Connection refused" error encountered while running Hadoop and its fix: specifying the HTTP addresses and ports of the NameNode and Secondary NameNode in hdfs-site.xml resolves the service-unreachable problem caused by the missing network address configuration.


1.  PriviledgedActionException as:oracle cause:java.net.ConnectException: Connection refused

Running the health-check command: hadoop fsck /
Warning: $HADOOP_HOME is deprecated.

13/10/21 13:11:24 ERROR security.UserGroupInformation: PriviledgedActionException as:oracle cause:java.net.ConnectException: Connection refused
Exception in thread "main" java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at java.net.Socket.connect(Socket.java:528)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
        at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
        at sun.net.www.http.HttpClient.New(HttpClient.java:308)
        at sun.net.www.http.HttpClient.New(HttpClient.java:326)
        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1300)
        at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:142)
        at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:109)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:109)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:183)

Solution: add the following to hdfs-site.xml:

<property>
    <name>dfs.http.address</name>
    <value>192.168.0.150:50070</value>
    <description>
        The address and the base port where the dfs namenode web ui will listen on.
        If the port is 0 then the server will start on a free port.
    </description>
</property>

<property>
    <name>dfs.secondary.http.address</name>
    <value>192.168.9.151:50090</value>
    <description>
        The secondary namenode http server address and port.
        If the port is 0 then the server will start on a free port.
    </description>
</property>
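After changing the configuration, HDFS has to be restarted before the new dfs.http.address takes effect. A minimal sanity check, assuming the NameNode host 192.168.0.150 from the configuration above and the standard Hadoop 1.x start/stop scripts on the PATH:

# Restart HDFS so the new dfs.http.address is picked up
stop-dfs.sh
start-dfs.sh

# Confirm the NameNode web UI port is listening and answers HTTP
netstat -tln | grep 50070
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.0.150:50070/

# Re-run the health check that failed before
hadoop fsck /

If curl prints 200 and fsck completes without a ConnectException, the HTTP address is configured correctly.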


2.   ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint:

[oracle@Slave1 logs]$ tail -f hadoop-oracle-secondarynamenode-Slave1.Hadoop.log
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$4.run(SecondaryNameNode.java:420)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.downloadCheckpointFiles(SecondaryNameNode.java:420)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:520)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:396)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:360)
        at java.lang.Thread.run(Thread.java:724)

2013-10-21 13:35:01,027 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Opening connection to http://0.0.0.0:50070/getimage?getimage=1
2013-10-21 13:35:01,030 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:oracle cause:java.net.ConnectException: Connection refused
2013-10-21 13:35:01,030 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint:
2013-10-21 13:35:01,030 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at java.net.Socket.connect(Socket.java:528)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
        at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
        at sun.net.www.http.HttpClient.New(HttpClient.java:308)
        at sun.net.www.http.HttpClient.New(HttpClient.java:326)
        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1300)
        at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:177)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$4.run(SecondaryNameNode.java:431)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$4.run(SecondaryNameNode.java:420)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.downloadCheckpointFiles(SecondaryNameNode.java:420)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:520)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:396)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:360)
        at java.lang.Thread.run(Thread.java:724)

2013-10-21 13:37:16,758 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: SHUTDOWN_MSG:

Solution: add the following to hdfs-site.xml (the same configuration as for problem 1):

<property>
    <name>dfs.http.address</name>
    <value>192.168.0.150:50070</value>
    <description>
        The address and the base port where the dfs namenode web ui will listen on.
        If the port is 0 then the server will start on a free port.
    </description>
</property>

<property>
    <name>dfs.secondary.http.address</name>
    <value>192.168.9.151:50090</value>
    <description>
        The secondary namenode http server address and port.
        If the port is 0 then the server will start on a free port.
    </description>
</property>
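In this case the SecondaryNameNode was trying to fetch the checkpoint image from http://0.0.0.0:50070/getimage?getimage=1, i.e. it never learned the NameNode's real address. After adding the properties above and restarting, a quick check from the SecondaryNameNode host (a sketch, assuming the addresses used in the configuration above):

# From the SecondaryNameNode host, confirm the NameNode image servlet is reachable
curl -s -o /dev/null -w "%{http_code}\n" "http://192.168.0.150:50070/getimage?getimage=1"

# Then watch the SecondaryNameNode log for the next successful checkpoint
tail -f hadoop-oracle-secondarynamenode-Slave1.Hadoop.log

A 200 from curl means the NameNode HTTP address is now reachable from the SecondaryNameNode, and the doCheckpoint errors should stop.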
