Hadoop Source Code Analysis Series (4): The Protocol Part of the org.apache.hadoop.hdfs Package

HDFS Protocols and Core Components
This article walks through the key protocols and components of Hadoop HDFS, including the purpose of the DataTransferProtocol, ClientDatanodeProtocol, and ClientProtocol interfaces. It also examines the exception hierarchy and data structures such as Block and LocatedBlock (a block together with the nodes that hold it).
The hdfs package is the main implementation of Hadoop HDFS. We begin with the protocol package, which defines the communication protocols used between the different HDFS nodes; understanding these protocols lays the groundwork for the later chapters on HDFS server/client communication. As usual, let us first look at a few standalone classes in this package:
(Figure 1: standalone classes in the org.apache.hadoop.hdfs.protocol package)
The DataTransferProtocol interface: this interface defines the streaming protocol used to transfer data between clients and datanodes. It contains the following constants:
(Figure 2: constants defined in DataTransferProtocol)
A few of these deserve special mention:
OP_REPLACE_BLOCK is sent from the balancer to the target node and carries the block id, the source, and the proxy; OP_COPY_BLOCK is sent from the destination node to the proxy and carries only the block id. The reply to OP_COPY_BLOCK must carry the block contents, while the reply to OP_REPLACE_BLOCK carries the operation status. HEARTBEAT_SEQNO is the sequence number used for heartbeat packets. The meaning of the remaining constants is clear from their names.
The static inner class PipelineAck in this interface performs the initial parsing of reply packets, for example recognizing heartbeat acks and making a first check on the returned status codes.
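For orientation, here is a minimal, self-contained sketch of what such a set of streaming opcodes looks like. The names mirror the constants discussed above, but the concrete byte values are illustrative assumptions and may differ from the DataTransferProtocol of your Hadoop version.

```java
// Illustrative sketch only: opcode constants in the spirit of DataTransferProtocol.
// The byte values below are assumptions, not copied from a specific Hadoop release.
public interface DataTransferOpcodes {
    byte OP_WRITE_BLOCK    = (byte) 80;  // client/datanode -> datanode: write a block
    byte OP_READ_BLOCK     = (byte) 81;  // client -> datanode: read a block
    byte OP_READ_METADATA  = (byte) 82;  // read block metadata (checksums)
    byte OP_REPLACE_BLOCK  = (byte) 83;  // balancer -> target datanode: block id + source + proxy
    byte OP_COPY_BLOCK     = (byte) 84;  // target datanode -> proxy: block id only; reply carries the data
    byte OP_BLOCK_CHECKSUM = (byte) 85;  // ask a datanode for a block checksum

    // Status codes carried in replies (e.g. the reply to OP_REPLACE_BLOCK).
    int OP_STATUS_SUCCESS = 0;
    int OP_STATUS_ERROR   = 1;

    // Sequence number that marks heartbeat packets in the ack stream.
    long HEARTBEAT_SEQNO = -1L;
}
```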

The ClientDatanodeProtocol interface mainly defines recoverBlock, the method used for data block recovery.

FSConstants mainly defines constants.

The AlreadyBeingCreatedException class defines the exception thrown when trying to create a file that is already being created.

The ClientProtocol interface defines the interaction between user code and the namenode. Its methods, analyzed in detail here, are the foundation of operating HDFS through the API (a hedged sketch of the typical client write flow follows this list):
getBlockLocations(String src, long offset, long length): get the list of datanodes holding the blocks that cover the given range of the file, sorted by distance from the client

create(String src, FsPermission masked, String clientName, boolean overwrite, short replication, long blockSize): create a new file entry in the namespace; once created, the file is visible to other clients, and further blocks can be added to it via addBlock
append(String src, String clientName): append to the end of an existing file
setReplication(String src, short replication): set the replication factor of an existing file

setPermission(String src, FsPermission permission): set the permission of an existing file
abandonBlock(Block b, String src, String holder): abandon a block the client was allocated
addBlock(String src, String clientName): allocate a new block for the file and return its locations
complete(String src, String clientName): the client signals that it has finished writing; returns whether the file is complete
reportBadBlocks(LocatedBlock[] blocks): report corrupt blocks
rename(String src, String dst): rename a file or directory
delete(String src): delete a file from the filesystem
delete(String src, boolean recursive): delete recursively
mkdirs(String src, FsPermission masked): create a directory with the given name and permission
getListing(String src): get the listing of files under the given directory
getStats(): get statistics about the filesystem
getDatanodeReport(FSConstants.DatanodeReportType type): return information about the current datanodes
getPreferredBlockSize(String filename): return the preferred block size of the given file
setSafeMode(FSConstants.SafeModeAction action): enter, leave, or query safe mode
saveNamespace(): save the current namespace image and reset the edit log
metaSave(String filename): dump the namenode's internal information to the given file
fsync(String src, String client): write the given file's metadata to stable storage
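To make the write path concrete, here is a hedged sketch of how a client drives these calls: create a file entry in the namespace, ask the namenode for blocks one at a time, then mark the file complete. The NamenodeClient interface below is a simplified, hypothetical stand-in for ClientProtocol (the real signatures carry more parameters and differ across Hadoop versions), so treat it as an illustration of the call sequence rather than the actual API.

```java
// Hypothetical, simplified stand-in for the namenode side of ClientProtocol.
interface NamenodeClient {
    void create(String src, String clientName, boolean overwrite,
                short replication, long blockSize);
    LocatedBlockStub addBlock(String src, String clientName);
    boolean complete(String src, String clientName);
    LocatedBlockStub[] getBlockLocations(String src, long offset, long length);
}

// Minimal placeholder for a located block: the block id plus the datanodes that hold it.
class LocatedBlockStub {
    long blockId;
    String[] datanodeHosts;  // sorted by distance from the client by the namenode
}

class ClientWriteFlow {
    // Illustrates the create -> addBlock -> (stream data) -> complete sequence described
    // above; the actual data streaming goes over DataTransferProtocol, not this RPC interface.
    static void writeFile(NamenodeClient namenode, String src, String clientName) {
        namenode.create(src, clientName, true, (short) 3, 64L * 1024 * 1024);
        for (int i = 0; i < 2; i++) {                         // ask for blocks as needed
            LocatedBlockStub blk = namenode.addBlock(src, clientName);
            // ... write the block contents to blk.datanodeHosts via the stream protocol ...
        }
        while (!namenode.complete(src, clientName)) {
            // complete() may return false until all blocks are minimally replicated
        }
    }
}
```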

The UnregisteredDatanodeException class defines the exception raised when a DataNode that has not registered communicates with the namenode.

The purpose of the BlockListAsLongs class is to convert an array of blocks directly into a long array.
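The idea can be sketched as follows. The three-longs-per-block layout (block id, length, generation stamp) is an assumption about the encoding, but it shows why a long[] is much cheaper to serialize than an array of Block objects.

```java
// Minimal sketch of the BlockListAsLongs idea: flatten a list of blocks into a long[]
// so that a full block report can be serialized as a single primitive array.
// Assumed layout: 3 longs per block = { blockId, numBytes, generationStamp }.
class BlockListSketch {
    static long[] toLongs(long[][] blocks) {   // each row: {id, length, genStamp}
        long[] out = new long[blocks.length * 3];
        for (int i = 0; i < blocks.length; i++) {
            out[3 * i]     = blocks[i][0];
            out[3 * i + 1] = blocks[i][1];
            out[3 * i + 2] = blocks[i][2];
        }
        return out;
    }
}
```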


Next comes an exception hierarchy:
(Figure 3: the quota exception hierarchy)
First, a word about quotas: a directory's quota can be either a disk-space quota or a namespace quota.
The QuotaExceededException class indicates that a directory's actual usage conflicts with its configured quota.
NSQuotaExceededException means the directory's actual namespace usage conflicts with its namespace quota.
DSQuotaExceededException means the directory's actual disk-space usage conflicts with its disk-space quota.
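The relationship between the three classes can be sketched as a small hierarchy. The constructors below are illustrative, not the exact ones in the source.

```java
// Illustrative sketch of the quota exception hierarchy described above.
class QuotaExceededExceptionSketch extends java.io.IOException {
    final long quota;   // the configured limit
    final long count;   // the actual usage that violated it
    QuotaExceededExceptionSketch(long quota, long count) {
        super("quota=" + quota + " exceeded by actual usage=" + count);
        this.quota = quota;
        this.count = count;
    }
}

// Namespace quota: too many names (files/directories) under the directory.
class NSQuotaExceededExceptionSketch extends QuotaExceededExceptionSketch {
    NSQuotaExceededExceptionSketch(long quota, long count) { super(quota, count); }
}

// Disk-space quota: the directory's contents consume more bytes than allowed.
class DSQuotaExceededExceptionSketch extends QuotaExceededExceptionSketch {
    DSQuotaExceededExceptionSketch(long quota, long count) { super(quota, count); }
}
```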


The last class hierarchy in the protocol part:
(Figure 4: DatanodeID, Block, LocatedBlock, LocatedBlocks and DatanodeInfo)
The DatanodeID class uniquely identifies a datanode by host, port, and storage id; the port information also covers the info-server port and the ipc-server port.
The Block class is HDFS's own storage structure. A block is identified by a long id, its on-disk file name is prefixed with blk_, and it is described by an id, a length, and a generation timestamp.
The LocatedBlock class is, as the name suggests, a block whose storage has already been allocated, i.e. one already tracked by the namenode; it holds the block information plus a DatanodeInfo[] with the locations of its replicas.
LocatedBlocks records the file length, the list of allocated blocks, and whether the file is still being written.
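The containment relationship can be sketched like this. Field names follow the descriptions above, while the types and other details are simplified assumptions.

```java
import java.util.List;

// Simplified sketches of the block-related data structures described above.
class BlockSketch {
    long blockId;          // on disk the block file is named blk_<blockId>
    long numBytes;         // block length
    long generationStamp;  // timestamp used to tell block versions apart
}

class LocatedBlockSketch {
    BlockSketch block;     // which block
    String[] datanodes;    // stand-in for DatanodeInfo[]: where its replicas live
    long offset;           // offset of this block within the file
}

class LocatedBlocksSketch {
    long fileLength;                   // total file length
    List<LocatedBlockSketch> blocks;   // blocks allocated so far
    boolean underConstruction;         // whether the file is still being written
}
```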
The DatanodeInfo class represents the state of a DataNode in the Datanode Protocol and the Client Protocol.
Its main attributes are listed below (a simplified sketch follows the list):

(Figure 5: attributes of DatanodeInfo)
AdminStates: NORMAL (in service), DECOMMISSION_INPROGRESS (being decommissioned), DECOMMISSIONED (decommissioned)
capacity: total capacity of the data node
dfsUsed: space already used
remaining: space not yet used
lastUpdate: time of the last update
xceiverCount: number of active connections to this node
location: the node's rack location
hostName: host name the DataNode reported when registering
level: level of this node in the node tree
parent: parent node
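Pulling the list together, a simplified sketch of this state might look like the following; the field types are assumptions.

```java
// Simplified sketch of the DatanodeInfo state described above (types are assumptions).
class DatanodeInfoSketch {
    enum AdminState { NORMAL, DECOMMISSION_INPROGRESS, DECOMMISSIONED }

    AdminState adminState;  // in service / being decommissioned / decommissioned
    long capacity;          // total capacity of the datanode
    long dfsUsed;           // space already used by DFS
    long remaining;         // space still available
    long lastUpdate;        // time of the last update (heartbeat)
    int xceiverCount;       // number of active transfer connections to this node
    String location;        // rack location, e.g. "/default-rack"
    String hostName;        // host name reported at registration
    int level;              // depth of this node in the node tree
    Object parent;          // parent node in the tree
}
```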


Next we analyze the protocols in the server.protocol subpackage.
(Figure 6: classes in the server.protocol subpackage)

BlockMetaDataInfo extends the Block class, adding a lastScanTime attribute that records the time of the last scan.
The NamespaceInfo class extends StorageInfo; besides the basic storage information it also records the build version and the upgrade version information.
The InterDatanodeProtocol interface extends the versioned protocol interface and allows a data block to be updated with a new generation stamp and length.
DisallowedDatanodeException: the exception thrown when a datanode that is not allowed to communicate with the namenode tries to do so.
The NamenodeProtocol interface extends the versioned protocol interface; this protocol is used for communication between the namenode and the secondary namenode and provides methods for operating on the EditLog and the FsImage.
BlocksWithLocations provides a more efficient serialization and deserialization of block locations.
The DatanodeRegistration class holds all the information the namenode needs to identify and verify a datanode.
DatanodeProtocol is the protocol through which datanodes interact with the namenode.
(Figure 7: constants defined in DatanodeProtocol)
NOTIFY, DISK_ERROR, and INVALID_BLOCK define three error codes.
DNA_UNKNOWN: unrecognized action
DNA_TRANSFER: replicate a data block to other nodes
DNA_INVALIDATE: delete a data block
DNA_SHUTDOWN: shut down the node
DNA_REGISTER: re-register
DNA_FINALIZE: finalize the previous upgrade
DNA_RECOVERBLOCK: block recovery is needed
The main methods (a hedged sketch of the heartbeat loop follows this list):
register(DatanodeRegistration registration): register the datanode
sendHeartbeat(DatanodeRegistration registration, long capacity, long dfsUsed, long remaining, int xmitsInProgress, int xceiverCount): send a heartbeat telling the namenode that the datanode is still alive; the namenode may return, in the heartbeat reply, a list of commands for the datanode to execute
blockReport(DatanodeRegistration registration, long[] blocks): tell the namenode which blocks this node holds
blockReceived(DatanodeRegistration registration, Block blocks[], String[] delHints): tell the namenode about recently received blocks; the hints also let it delete excess, over-replicated replicas
errorReport(): report an error
commitBlockSynchronization: commit a block synchronization
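A hedged sketch of how a datanode drives this protocol: register once, then heartbeat periodically and act on whatever commands the namenode piggybacks on the reply. The NamenodeSide interface below is a simplified stand-in for DatanodeProtocol (real signatures and types differ), and the interval is illustrative.

```java
// Hedged sketch of a datanode's interaction with the namenode over DatanodeProtocol.
// NamenodeSide is a hypothetical, simplified stand-in; real signatures differ.
interface NamenodeSide {
    String register(String datanodeId);                            // register the datanode
    String[] sendHeartbeat(String registration, long capacity,
                           long dfsUsed, long remaining,
                           int xmitsInProgress, int xceiverCount); // reply carries commands
    void blockReport(String registration, long[] blocks);          // full report (see BlockListAsLongs)
}

class DatanodeLoopSketch {
    static void run(NamenodeSide namenode, String datanodeId) throws InterruptedException {
        String reg = namenode.register(datanodeId);     // DNA_REGISTER can force this again later
        while (true) {
            String[] commands = namenode.sendHeartbeat(
                    reg, 100L << 30, 40L << 30, 60L << 30, 0, 4);
            for (String cmd : commands) {
                // dispatch on the DNA_* action: transfer, invalidate, shutdown, register, ...
            }
            Thread.sleep(3000);                         // heartbeat interval (illustrative)
        }
    }
}
```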



(Figure 8: the DatanodeCommand hierarchy)
DatanodeCommand represents a command for the datanode to execute, such as register or upgrade.
BlockCommand: a block command, mainly used to send blocks to other datanodes.
UpgradeCommand: an upgrade command, carrying upgrade-related actions such as starting an upgrade or reporting upgrade status.
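A rough sketch of that command hierarchy, with field layouts that are my own assumptions rather than the actual class definitions:

```java
// Illustrative sketch of the commands returned to datanodes in heartbeat replies.
abstract class DatanodeCommandSketch {
    int action;  // one of the DNA_* codes listed above
}

// Carries a list of blocks plus, for DNA_TRANSFER, the target datanodes to copy them to
// (field layout assumed for illustration).
class BlockCommandSketch extends DatanodeCommandSketch {
    long[] blockIds;
    String[][] targets;   // targets[i] = datanodes that should receive blockIds[i]
}

// Carries distributed-upgrade control information (start upgrade, report status, ...).
class UpgradeCommandSketch extends DatanodeCommandSketch {
    int upgradeVersion;
    short upgradeStatus;
}
```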

Original work. Please credit the source when reposting: http://f.dataguru.cn/thread-19209-1-1.html