Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.security.AccessControlException:

This post covers a Hadoop permission-denied problem and gives four solutions: (1) set dfs.permissions to false in the Hadoop configuration (not recommended for production); (2) grant the target directory full 777 permissions; (3) add a HADOOP_USER_NAME environment variable or JVM property on the Windows 7 client, which is the recommended approach; (4) rename the local system account.


Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.security.AccessControlException: Permission denied: user=lenovo, access=WRITE, inode="":suh:supergroup:rwxr-xr-x
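This error typically appears when an HDFS client running on Windows (where the local login name is `lenovo`) tries to write into a directory owned by the Linux user `suh`. Below is a minimal sketch of the kind of client call that triggers it; the NameNode URI and target path are assumptions for illustration, not taken from the original post:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteFromWindows {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Without any extra configuration the client sends the Windows login
        // name ("lenovo") as the HDFS user.
        FileSystem fs = FileSystem.get(new URI("hdfs://namenode:9000"), conf);
        // Fails with AccessControlException: user=lenovo has no WRITE permission
        // on a directory owned by suh:supergroup with mode rwxr-xr-x.
        fs.mkdirs(new Path("/user/suh/output"));
        fs.close();
    }
}
```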

Solutions:

Solution 1:

On the server, edit the Hadoop configuration file conf/hdfs-site.xml, find the dfs.permissions property, and change its value to false:
```xml
<property>
  <name>dfs.permissions</name>
  <value>false</value>
  <description>
    If "true", enable permission checking in HDFS.
    If "false", permission checking is turned off,
    but all other behavior is unchanged.
    Switching from one parameter value to the other does not change the mode,
    owner or group of files or directories.
  </description>
</property>
```
Note: the Hadoop daemons must be restarted for the change to take effect. For security reasons this is generally not recommended in production.


Solution 2:
Grant the directory being written to the widest possible permissions (777); an equivalent call through the Java FileSystem API is sketched below:

```bash
hadoop fs -chmod 777 /
```

Note: a relatively quick and convenient workaround, though it opens the directory to every user.
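For completeness, the same loosening can be done from Java instead of the CLI. This is only a sketch: it has to be run as a user that is already allowed to change the directory (for example its owner `suh`), and the NameNode URI and path are assumptions:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class OpenUpDirectory {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect as the owning user; setPermission would otherwise be rejected
        // by the same AccessControlException we are trying to work around.
        FileSystem fs = FileSystem.get(new URI("hdfs://namenode:9000"), conf, "suh");
        fs.setPermission(new Path("/user/suh"), new FsPermission((short) 0777));
        fs.close();
    }
}
```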

Solution 3:
In the Windows 7 system environment variables, or as a Java JVM variable, add HADOOP_USER_NAME and set its value to the Linux user name that Hadoop runs as on the cluster. If you run the job from Eclipse, restart Eclipse after changing the environment variable, otherwise the change may not take effect. A minimal Java sketch is shown below.
Note: this is the recommended approach.
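A minimal Java sketch of this approach, assuming a simple-auth (non-Kerberos) cluster where the Linux user that owns the HDFS directory is `suh` and a hypothetical NameNode at hdfs://namenode:9000. Hadoop reads HADOOP_USER_NAME from either the environment or a JVM system property, so adding `-DHADOOP_USER_NAME=suh` to the Eclipse run configuration works as well:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteAsClusterUser {
    public static void main(String[] args) throws Exception {
        // Must be set before the first FileSystem/UserGroupInformation call,
        // otherwise the Windows login name ("lenovo") is used instead.
        System.setProperty("HADOOP_USER_NAME", "suh");

        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(new URI("hdfs://namenode:9000"), conf);
        fs.mkdirs(new Path("/user/suh/output"));   // now executed as "suh"
        fs.close();
    }
}
```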

Solution 4:

Rename the local Windows account to match the Linux user name that Hadoop runs as on the cluster.

A related case: the same kind of AccessControlException, raised when a Hive SQL statement submitted through HiveServer2 runs as the anonymous user:

```
SQL 错误 [1] [08S01]: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Permission denied: user=anonymous, access=EXECUTE, inode="/tmp":root:supergroup:drwx------
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:350)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:311)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:238)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1753)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1737)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1682)
    at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setPermission(FSDirAttrOp.java:64)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1781)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:830)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:469)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606)
    at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:380)
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:257)
    at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=anonymous, access=EXECUTE, inode="/tmp":root:supergroup:drwx------
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:350)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:311)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:238)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1753)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1737)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1682)
    at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setPermission(FSDirAttrOp.java:64)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1781)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:830)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:469)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
    at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1987)
    at org.apache.hadoop.hdfs.DistributedFileSystem$35.doCall(DistributedFileSystem.java:1690)
    at org.apache.hadoop.hdfs.DistributedFileSystem$35.doCall(DistributedFileSystem.java:1687)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1687)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:720)
    at org.apache.hadoop.mapreduce.JobResourceUploader.mkdirs(JobResourceUploader.java:648)
    at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:167)
    at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:128)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:101)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:414)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1232)
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:255)
    ... 11 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=anonymous, access=EXECUTE, inode="/tmp":root:supergroup:drwx------
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:350)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:311)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:238)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1753)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1737)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1682)
    at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setPermission(FSDirAttrOp.java:64)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1781)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:830)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:469)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1507)
    at org.apache.hadoop.ipc.Client.call(Client.java:1453)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy30.setPermission(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:365)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy31.setPermission(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1985)
    ... 44 more
```
### Fixing the Hive SQL permission error: user=anonymous, access=EXECUTE on /tmp

#### Root cause

The core of the error is that Hive is accessing HDFS as the anonymous user, and the `/tmp` directory does not allow that user to traverse it:

```bash
Permission denied: user=anonymous, access=EXECUTE, inode="/tmp":root:supergroup:drwx------
```

Three things combine to cause this:

1. The beeline connection did not specify an authenticated user (no `-n` option), so the session falls back to `anonymous`.
2. The HDFS `/tmp` directory has restrictive permissions (here `drwx------`, i.e. 700, owned by root).
3. Hive needs to write temporary job files under `/tmp` while executing the statement.

#### Solutions (in order of preference)

##### 1. Connect to beeline with an authenticated user (recommended)

Use `-n` to pass a user that has the required HDFS permissions; this resolves the anonymous-user problem in most cases:

```bash
# connect as a user with permissions (e.g. hadoop or hive)
beeline -u jdbc:hive2://<hiveserver2_host>:10000 -n <username>

# example: connect as root
beeline -u jdbc:hive2://doitedu01:10000 -n root
```

##### 2. Relax the /tmp permissions (temporary workaround)

```bash
# give other users execute (traverse) permission on /tmp
hdfs dfs -chmod o+x /tmp

# for a specific subdirectory such as /tmp/hadoop-yarn
hdfs dfs -chmod -R 1777 /tmp/hadoop-yarn   # sticky bit keeps it reasonably safe
```

Permission notes: `1777` is `rwxrwxrwt` (all users can read, write and execute); the trailing `t` is the sticky bit, which prevents users from deleting files they do not own. `-R` is recursive, so use it carefully to avoid overwriting permissions you want to keep.

##### 3. Configure a dedicated Hive scratch directory (recommended for production)

```bash
# create the dedicated directory and hand it to the hive user
hdfs dfs -mkdir -p /tmp/hive
hdfs dfs -chmod 1777 /tmp/hive
hdfs dfs -chown hive:hadoop /tmp/hive
```

Then point Hive at it in hive-site.xml:

```xml
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>  <!-- custom scratch directory -->
</property>
```

##### 4. Check the HDFS permission settings

Confirm the following in hdfs-site.xml (and core-site.xml for the static web-UI user):

```xml
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>  <!-- only acceptable as a temporary measure in test environments -->
</property>
<property>
  <name>hadoop.http.staticuser.user</name>
  <value>hadoop</value>  <!-- default static user -->
</property>
```

#### Verifying the fix

```bash
# check the directory permissions
hdfs dfs -ls -d /tmp /tmp/hive
# expected (example):
# drwxrwxrwt   - hadoop supergroup   /tmp
# drwxrwxrwt   - hive   hadoop       /tmp/hive
```

Then re-run the failing statement in beeline, for example `INSERT INTO student VALUES (4, 'sunny');` — it should now succeed.

#### Why it happens

The problem is the combination of a user identity that is not propagated and HDFS permission isolation:

1. The user identity is lost along the Beeline → HiveServer2 → Hadoop path, so the job is submitted as `anonymous`.
2. In the HDFS permission model, `/tmp` here allows only its owner (700), EXECUTE on a directory means permission to enter (traverse) it, and `anonymous` is not in `supergroup`, so the check fails.
3. Hive depends on that path at execution time:

```mermaid
graph LR
A[Hive SQL] --> B[MR/Spark job is generated]
B --> C[temporary files written to /tmp]
C --> D[YARN schedules the job]
```

#### Prevention

1. Always specify the connection user, e.g. in a wrapper script: `beeline -u jdbc:hive2://$HS2_HOST -n $USER -p $PASSWORD`
2. Clean temporary directories regularly; `hdfs dfs -expunge` forces the trash to be emptied.
3. Use Sentry or Ranger for centralized permission management.
4. In Kerberos-secured clusters, obtain a ticket with `kinit` before connecting and make sure the principal has the required rights; when applications connect through HiveServer2, setting `hive.server2.enable.doAs` to `true` in hive-site.xml makes HiveServer2 run the query as the connected user.

Key point: changes to hive-site.xml (such as the scratch directory) require a HiveServer2 restart to take effect. In production, prefer combining solution 1 (an explicit user) with solution 3 (a dedicated scratch directory) rather than opening `/tmp` to everyone.
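If the statement is submitted from Java rather than beeline, the same fix (an explicit user) can be applied through the Hive JDBC driver. This is a sketch only; the host, database, user, and table are assumptions carried over from the examples above:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveJdbcWithUser {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // Passing "root" here is the JDBC equivalent of "beeline ... -n root";
        // the password can stay empty when HiveServer2 authentication is NONE.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://doitedu01:10000/default", "root", "");
             Statement st = conn.createStatement()) {
            st.execute("INSERT INTO student VALUES (4, 'sunny')");
        }
    }
}
```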