The MapReduce job failed with the following error:
java.lang.RuntimeException: Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive/root/c6d2b6f4-3b63-43c6-b9a5-522c3421b579/hive_2016-11-23_08-36-35_507_2401199014579657933-3/-mr-10012/a8f73768-fbe2-4b9d-8c48-3cd39028be33/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 48 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1595)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3287)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:677)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:213)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:485)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
Solution:
Check whether HDFS has enough free space. This error usually means HDFS has run out of space to cache the job plan (map.xml), so the NameNode cannot place the new block on any DataNode.
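A quick way to verify is to look at the capacity reported by the NameNode and by the DataNodes, for example with the standard HDFS CLI (a sketch; exact output format varies by Hadoop version):

    hdfs dfsadmin -report | head -n 20    # overall capacity, DFS used and DFS remaining
    hdfs dfs -df -h /                     # configured capacity, used and available space

If "DFS Remaining" is close to zero, the NameNode cannot allocate new blocks and writes fail with "could only be replicated to 0 nodes".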
Removing some unused data from HDFS resolved the issue.
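For example, large directories can be located and cleaned up with the standard HDFS CLI (the /tmp/old_job_output path below is only a placeholder for whatever data turns out to be unused):

    hdfs dfs -du / | sort -n -r | head            # list the largest top-level directories (sizes in bytes)
    hdfs dfs -rm -r -skipTrash /tmp/old_job_output   # permanently delete data that is no longer needed

Note that without -skipTrash the deleted files are only moved to the user's .Trash directory, so the space is not actually freed until the trash is emptied.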
In short: this post describes a caching error hit while running a Hadoop MR job. The file could not be replicated to enough nodes, which is usually caused by HDFS running out of space; the fix is to clean up unused data in HDFS.