Permission errors when running Spark jobs in local and cluster mode on a Linux system

Important note ****
These notes are fragmentary rather than systematic. Because my big data setup was pieced together from different platforms, I ran into permission problems and other issues and took many detours. In the end, the command /bigdata/project/spark-yarn/bin/spark-submit --master yarn /root/helloworld.py ran successfully.

Important note ****
The main text follows.

On 2024.11.5 I ran into permission errors when running Spark jobs in local mode and cluster mode on Linux. I resolved the problem through ChatGPT, and the Q&A is recorded below.
(base) [root@hadoop133 ~]# /bigdata/project/spark-yarn/bin/spark-submit --master local[*] /root/helloworld.py
24/11/05 16:45:17 INFO SparkContext: Running Spark version 3.3.1
24/11/05 16:45:17 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
24/11/05 16:45:17 INFO ResourceUtils: ==============================================================
24/11/05 16:45:17 INFO ResourceUtils: No custom resources configured for spark.driver.
24/11/05 16:45:17 INFO ResourceUtils: ==============================================================
24/11/05 16:45:17 INFO SparkContext: Submitted application: worldCount
24/11/05 16:45:17 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
24/11/05 16:45:17 INFO ResourceProfile: Limiting resource is cpu
24/11/05 16:45:17 INFO ResourceProfileManager: Added ResourceProfile id: 0
24/11/05 16:45:17 INFO SecurityManager: Changing view acls to: root
24/11/05 16:45:17 INFO SecurityManager: Changing modify acls to: root
24/11/05 16:45:17 INFO SecurityManager: Changing view acls groups to:
24/11/05 16:45:17 INFO SecurityManager: Changing modify acls groups to:
24/11/05 16:45:17 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
24/11/05 16:45:17 INFO Utils: Successfully started service 'sparkDriver' on port 35004.
24/11/05 16:45:17 INFO SparkEnv: Registering MapOutputTracker
24/11/05 16:45:17 INFO SparkEnv: Registering BlockManagerMaster
24/11/05 16:45:17 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
24/11/05 16:45:17 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
24/11/05 16:45:17 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
24/11/05 16:45:17 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-44835c74-c41a-483d-99df-af6c8a89d895
24/11/05 16:45:17 INFO MemoryStore: MemoryStore started with capacity 366.3 MiB
24/11/05 16:45:17 INFO SparkEnv: Registering OutputCommitCoordinator
24/11/05 16:45:18 INFO Utils: Successfully started service 'SparkUI' on port 4040.
24/11/05 16:45:18 INFO Executor: Starting executor ID driver on host hadoop133
24/11/05 16:45:18 INFO Executor: Starting executor with user classpath (userClassPathFirst = false): ''
24/11/05 16:45:18 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35549.
24/11/05 16:45:18 INFO NettyBlockTransferService: Server created on hadoop133:35549
24/11/05 16:45:18 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
24/11/05 16:45:18 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, hadoop133, 35549, None)
24/11/05 16:45:18 INFO BlockManagerMasterEndpoint: Registering block manager hadoop133:35549 with 366.3 MiB RAM, BlockManagerId(driver, hadoop133, 35549, None)
24/11/05 16:45:18 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, hadoop133, 35549, None)
24/11/05 16:45:18 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, hadoop133, 35549, None)
24/11/05 16:45:19 ERROR SparkContext: Error initializing SparkContext.
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkHadoopUtil$.createFile(SparkHadoopUtil.scala:541)
    at org.apache.spark.deploy.history.EventLogFileWriter.initLogFile(EventLogFileWriters.scala:98)
    at org.apache.spark.deploy.history.SingleEventLogFileWriter.start(EventLogFileWriters.scala:223)
    at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:83)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:612)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:238)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
    at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/directory":user001:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:504)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:336)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:242)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1939)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1923)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1882)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.resolvePathForStartFile(FSDirWriteFileOp.java:323)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2638)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2578)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:810)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:494)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:286)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1271)
    at org.apache.hadoop.hdfs.DistributedFileSystem$10.doCall(DistributedFileSystem.java:662)
    at org.apache.hadoop.hdfs.DistributedFileSystem$10.doCall(DistributedFileSystem.java:659)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.createNonRecursive(DistributedFileSystem.java:680)
    at org.apache.hadoop.hdfs.DistributedFileSystem.access$500(DistributedFileSystem.java:147)
    at org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.build(DistributedFileSystem.java:3557)
    ... 22 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/directory":user001:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:504)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:336)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:242)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1939)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1923)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1882)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.resolvePathForStartFile(FSDirWriteFileOp.java:323)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2638)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2578)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:810)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:494)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1612)
    at org.apache.hadoop.ipc.Client.call(Client.java:1558)
    at org.apache.hadoop.ipc.Client.call(Client.java:1455)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
    at com.sun.proxy.$Proxy30.create(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:382)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy31.create(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:280)
    ... 29 more
24/11/05 16:45:19 INFO SparkUI: Stopped Spark web UI at http://hadoop133:4040
24/11/05 16:45:19 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
24/11/05 16:45:19 INFO MemoryStore: MemoryStore cleared
24/11/05 16:45:19 INFO BlockManager: BlockManager stopped
24/11/05 16:45:19 INFO BlockManagerMaster: BlockManagerMaster stopped
24/11/05 16:45:19 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
24/11/05 16:45:19 INFO SparkContext: Successfully stopped SparkContext
Traceback (most recent call last):
  ...
  File "...", line 197, in __init__
  ...
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/directory":user001:supergroup:drwxr-xr-x
    ...
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:286)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1271)
    at org.apache.hadoop.hdfs.DistributedFileSystem$10.doCall(DistributedFileSystem.java:662)
    at org.apache.hadoop.hdfs.DistributedFileSystem$10.doCall(DistributedFileSystem.java:659)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.createNonRecursive(DistributedFileSystem.java:680)
    at org.apache.hadoop.hdfs.DistributedFileSystem.access$500(DistributedFileSystem.java:147)
    at org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.build(DistributedFileSystem.java:3557)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkHadoopUtil$.createFile(SparkHadoopUtil.scala:541)
    at org.apache.spark.deploy.history.EventLogFileWriter.initLogFile(EventLogFileWriters.scala:98)
    at org.apache.spark.deploy.history.SingleEventLogFileWriter.start(EventLogFileWriters.scala:223)
    at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:83)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:612)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:238)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
    at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/directory":user001:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:504)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:336)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:242)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1939)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1923)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1882)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.resolvePathForStartFile(FSDirWriteFileOp.java:323)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2638)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2578)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:810)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:494)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1612)
    at org.apache.hadoop.ipc.Client.call(Client.java:1558)
    at org.apache.hadoop.ipc.Client.call(Client.java:1455)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
    at com.sun.proxy.$Proxy30.create(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:382)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy31.create(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:280)
    ... 29 more

24/11/05 16:45:19 INFO ShutdownHookManager: Shutdown hook called
24/11/05 16:45:19 INFO ShutdownHookManager: Deleting directory /tmp/spark-25496fa9-600b-43ec-bc94-709dadbc5ed3
24/11/05 16:45:19 INFO ShutdownHookManager: Deleting directory /tmp/spark-edc6da20-4add-4ea7-b18d-2bb8262a6dcc
Reply in Chinese: how do I fix this error?

GPT-4-128k reply:
You hit a permission-related error while submitting a Spark job. It is caused by the permission settings on the Hadoop Distributed File System (HDFS). Specifically, the message Permission denied: user=root, access=WRITE, inode="/directory":user001:supergroup:drwxr-xr-x means the Spark job, running as the root user, tried to write to the directory /directory, but that directory is owned by user001 and other users have no write permission on it.

You can fix this in any of the following ways:

Change the directory permissions: change the permissions on /directory so that the root user (or other users and groups) can write to it, using the following Hadoop filesystem command:

hdfs dfs -chmod 775 /directory
This changes the permissions so that the directory owner (user001) and members of its group (supergroup) can read, write and execute, while other users can read and execute.
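
Before and after changing the mode bits, it helps to confirm who actually owns the directory. A quick check (assuming the hdfs client is on the PATH of the submitting machine):

# List the root of HDFS, then the entry for /directory itself (-d avoids listing its contents)
hdfs dfs -ls /
hdfs dfs -ls -d /directory

The output shows the owner, group and mode string (e.g. drwxr-xr-x user001 supergroup) that appear in the error message.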

Change the directory owner: if you want the root user to own the directory, use:

hdfs dfs -chown root:supergroup /directory
root then becomes the owner of /directory and can read and write it.
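
If the directory already has contents that the job also needs to touch, the recursive form changes ownership of everything underneath it as well (a sketch, use with care on shared directories):

# -R applies the ownership change to the directory and all files/subdirectories inside it
hdfs dfs -chown -R root:supergroup /directory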

Run Spark as a different user: you can submit the Spark job as user001, which naturally has write permission on /directory. If you submit from the command line, switch to that user first:

su user001
and then run your spark-submit command.
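
An alternative that avoids switching the Linux user: with Hadoop's default simple authentication (no Kerberos; the log above shows "authentication disabled"), the HDFS client takes its identity from the HADOOP_USER_NAME environment variable if it is set. A minimal sketch:

# Stay logged in as root, but act as user001 toward HDFS (simple auth only)
export HADOOP_USER_NAME=user001
/bigdata/project/spark-yarn/bin/spark-submit --master local[*] /root/helloworld.py

This only works without Kerberos; on a secured cluster the identity comes from the Kerberos ticket instead.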

Configure Hadoop ACLs (access control lists): if ACLs are enabled on your Hadoop cluster, you can grant a specific user write permission without changing the owner or the group permissions:

hdfs dfs -setfacl -m user:root:rw- /directory
This command grants the root user read and write permission on /directory.
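
Two caveats worth adding here: ACLs only take effect if dfs.namenode.acls.enabled is true in hdfs-site.xml (in older Hadoop releases it defaults to false), and creating files inside a directory also requires execute (traversal) permission on it, so user:root:rwx is often what you actually want rather than rw-. You can inspect the effective ACL afterwards with:

# Print the owner, group and any extra ACL entries on the directory
hdfs dfs -getfacl /directory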

Check and fix the paths in your script: confirm that your Spark job specifies the correct output path. Permission errors sometimes arise from a wrong path (for example, one pointing at an incorrect or nonexistent HDFS location).

After making these changes, verify that the permissions are set correctly and re-run your Spark job. Also, when using HDFS commands, make sure the Hadoop environment variables and user permissions are configured properly, to avoid problems caused by an incorrectly set environment.
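
One detail worth adding to the answer above: in the local-mode trace, the failing frames are EventLoggingListener / EventLogFileWriter, i.e. the write that is denied is Spark's event log, so /directory is evidently the configured spark.eventLog.dir. A quick way to confirm which HDFS path Spark is trying to write (the conf path is assumed from the spark-submit command used above):

# Show the event-log settings the submit side picks up
grep -i eventlog /bigdata/project/spark-yarn/conf/spark-defaults.conf

Fixing the permissions or ownership of that one directory with the commands above is enough for local mode. The next attempt, submitting with --master yarn, failed the same way but on a different inode (/user), as the following log shows.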

(base) [root@hadoop133 ~]# /bigdata/project/spark-yarn/bin/spark-submit --master yarn /root/helloworld.py
24/11/05 16:54:50 INFO SparkContext: Running Spark version 3.3.1
24/11/05 16:54:50 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
24/11/05 16:54:50 INFO ResourceUtils: ==============================================================
24/11/05 16:54:50 INFO ResourceUtils: No custom resources configured for spark.driver.
24/11/05 16:54:50 INFO ResourceUtils: ==============================================================
24/11/05 16:54:50 INFO SparkContext: Submitted application: worldCount
24/11/05 16:54:50 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
24/11/05 16:54:50 INFO ResourceProfile: Limiting resource is cpus at 1 tasks per executor
24/11/05 16:54:50 INFO ResourceProfileManager: Added ResourceProfile id: 0
24/11/05 16:54:50 INFO SecurityManager: Changing view acls to: root
24/11/05 16:54:50 INFO SecurityManager: Changing modify acls to: root
24/11/05 16:54:50 INFO SecurityManager: Changing view acls groups to:
24/11/05 16:54:50 INFO SecurityManager: Changing modify acls groups to:
24/11/05 16:54:50 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
24/11/05 16:54:50 INFO Utils: Successfully started service 'sparkDriver' on port 46098.
24/11/05 16:54:50 INFO SparkEnv: Registering MapOutputTracker
24/11/05 16:54:50 INFO SparkEnv: Registering BlockManagerMaster
24/11/05 16:54:50 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
24/11/05 16:54:50 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
24/11/05 16:54:51 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
24/11/05 16:54:51 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-a7c51b13-46e3-45d4-b99a-3cef2d4b3bb2
24/11/05 16:54:51 INFO MemoryStore: MemoryStore started with capacity 366.3 MiB
24/11/05 16:54:51 INFO SparkEnv: Registering OutputCommitCoordinator
24/11/05 16:54:51 INFO Utils: Successfully started service 'SparkUI' on port 4040.
24/11/05 16:54:51 INFO DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at hadoop134/192.168.88.134:8032
24/11/05 16:54:52 INFO Configuration: resource-types.xml not found
24/11/05 16:54:52 INFO ResourceUtils: Unable to find 'resource-types.xml'.
24/11/05 16:54:52 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
24/11/05 16:54:52 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
24/11/05 16:54:52 INFO Client: Setting up container launch context for our AM
24/11/05 16:54:52 INFO Client: Setting up the launch environment for our AM container
24/11/05 16:54:52 INFO Client: Preparing resources for our AM container
24/11/05 16:54:52 ERROR SparkContext: Error initializing SparkContext.
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":user001:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:504)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:336)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermissionWithContext(FSPermissionChecker.java:360)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:240)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1939)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1923)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1882)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3410)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1170)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:740)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2509)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2483)
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1485)
69. at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1482)
70. at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
71. at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1499)
72. at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1474)
73. at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2388)
74. at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:750)
75. at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:507)
76. at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:982)
77. at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:220)
78. at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
79. at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:222)
80. at org.apache.spark.SparkContext.<init>(SparkContext.scala:585)
81. at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
82. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
83. at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
84. at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
85. at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
86. at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
87. at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
88. at py4j.Gateway.invoke(Gateway.java:238)
89. at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
90. at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
91. at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
92. at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
93. at java.lang.Thread.run(Thread.java:748)
94.Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/user":user001:supergroup:drwxr-xr-x
95. at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:504)
96. at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:336)
97. at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermissionWithContext(FSPermissionChecker.java:360)
98. at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:240)
99. at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1939)
100. at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1923)
101. at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1882)
102. at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
103. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3410)
104. at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1170)
105. at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:740)
106. at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
107. at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
108. at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
109. at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
110. at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
111. at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
112. at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
113. at java.security.AccessController.doPrivileged(Native Method)
114. at javax.security.auth.Subject.doAs(Subject.java:422)
115. at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
116. at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)
117.
118. at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1612)
119. at org.apache.hadoop.ipc.Client.call(Client.java:1558)
120. at org.apache.hadoop.ipc.Client.call(Client.java:1455)
121. at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
122. at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
123. at com.sun.proxy.$Proxy33.mkdirs(Unknown Source)
124. at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:674)
125. at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
126. at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
127. at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
128. at java.lang.reflect.Method.invoke(Method.java:498)
129. at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
130. at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
131. at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
132. at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
133. at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
134. at com.sun.proxy.$Proxy34.mkdirs(Unknown Source)
135. at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2507)
136. … 27 more
137.24/11/05 16:54:52 INFO SparkUI: Stopped Spark web UI at http://hadoop133:4040
138.24/11/05 16:54:52 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to send shutdown message before the AM has registered!
139.24/11/05 16:54:52 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
140.24/11/05 16:54:52 INFO YarnClientSchedulerBackend: Shutting down all executors
141.24/11/05 16:54:52 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
142.24/11/05 16:54:52 INFO YarnClientSchedulerBackend: YARN client scheduler backend Stopped
143.24/11/05 16:54:52 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
144.24/11/05 16:54:52 INFO MemoryStore: MemoryStore cleared
145.24/11/05 16:54:52 INFO BlockManager: BlockManager stopped
146.24/11/05 16:54:52 INFO BlockManagerMaster: BlockManagerMaster stopped
147.24/11/05 16:54:52 WARN MetricsSystem: Stopping a MetricsSystem that is not running
148.24/11/05 16:54:52 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
149.24/11/05 16:54:52 INFO SparkContext: Successfully stopped SparkContext
150.Traceback (most recent call last):
151. File "/root/helloworld.py", line 6, in <module>
152. sc=SparkContext(conf=conf)
153. File "/bigdata/project/spark-yarn/python/lib/pyspark.zip/pyspark/context.py", line 197, in __init__
154. File "/bigdata/project/spark-yarn/python/lib/pyspark.zip/pyspark/context.py", line 282, in _do_init
155. File "/bigdata/project/spark-yarn/python/lib/pyspark.zip/pyspark/context.py", line 402, in _initialize_context
156. File "/bigdata/project/spark-yarn/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py", line 1585, in __call__
157. File "/bigdata/project/spark-yarn/python/lib/py4j-0.10.9.5-src.zip/py4j/protocol.py", line 326, in get_return_value
158.py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
159.: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":user001:supergroup:drwxr-xr-x
160. at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:504)
161. at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:336)
162. at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermissionWithContext(FSPermissionChecker.java:360)
163. at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:240)
164. at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1939)
165. at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1923)
166. at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1882)
167. at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
168. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3410)
169. at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1170)
170. at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:740)
171. at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
172. at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
173. at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
174. at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
175. at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
176. at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
177. at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
178. at java.security.AccessController.doPrivileged(Native Method)
179. at javax.security.auth.Subject.doAs(Subject.java:422)
180. at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
181. at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)
182.
183. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
184. at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
185. at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
186. at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
187. at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
188. at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
189. at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2509)
190. at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2483)
191. at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1485)
192. at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1482)
193. at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
194. at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1499)
195. at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1474)
196. at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2388)
197. at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:750)
198. at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:507)
199. at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:982)
200. at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:220)
201. at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
202. at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:222)
203. at org.apache.spark.SparkContext.<init>(SparkContext.scala:585)
204. at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
205. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
206. at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
207. at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
208. at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
209. at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
210. at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
211. at py4j.Gateway.invoke(Gateway.java:238)
212. at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
213. at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
214. at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
215. at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
216. at java.lang.Thread.run(Thread.java:748)
217.Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/user":user001:supergroup:drwxr-xr-x
218. at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:504)
219. at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:336)
220. at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermissionWithContext(FSPermissionChecker.java:360)
221. at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:240)
222. at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1939)
223. at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1923)
224. at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1882)
225. at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
226. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3410)
227. at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1170)
228. at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:740)
229. at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
230. at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
231. at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
232. at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
233. at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
234. at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
235. at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
236. at java.security.AccessController.doPrivileged(Native Method)
237. at javax.security.auth.Subject.doAs(Subject.java:422)
238. at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
239. at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)
240.
241. at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1612)
242. at org.apache.hadoop.ipc.Client.call(Client.java:1558)
243. at org.apache.hadoop.ipc.Client.call(Client.java:1455)
244. at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
245. at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
246. at com.sun.proxy.$Proxy33.mkdirs(Unknown Source)
247. at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:674)
248. at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
249. at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
250. at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
251. at java.lang.reflect.Method.invoke(Method.java:498)
252. at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
253. at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
254. at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
255. at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
256. at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
257. at com.sun.proxy.$Proxy34.mkdirs(Unknown Source)
258. at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2507)
259. … 27 more
260.
261.24/11/05 16:54:52 INFO ShutdownHookManager: Shutdown hook called
262.24/11/05 16:54:52 INFO ShutdownHookManager: Deleting directory /tmp/spark-b0f5eb5c-c881-4925-8ba9-b293c2a26f5b
263.24/11/05 16:54:52 INFO ShutdownHookManager: Deleting directory /tmp/spark-bdfaeee0-4a44-4d11-84ea-bcb4016eebb6
264. In local mode the job now runs normally after applying the hdfs dfs -chown root:supergroup /directory fix described above, but when I run it in cluster mode I get the error above. How do I solve this?

Gpt-4-128k reply: [in the end there was still an error; the problem of the python3 interpreter not being found was solved later, see note 3 below]
When a Spark job hits permission problems in cluster mode, especially permission-denied errors while interacting with HDFS, it usually means your user (root in this case) is not authorized for some operation. Here the error states explicitly that user root has no write permission on the /user directory. This is an HDFS directory-permission issue: on a Hadoop cluster the root user does not automatically hold every privilege, so permissions have to be configured appropriately.

To resolve the problem you are facing, you can take the following steps:

  1. Confirm the directory permissions
    First, check the current permissions and ownership of the /user directory with the following command:

hdfs dfs -ls /user
This shows the permissions and owner of the /user directory. If the root user does not have the appropriate permissions, you will need to change them.
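For reference, the entry for /user itself can be shown with the -d flag; a listing consistent with the error above would look roughly like this (illustrative output; the size and date are placeholders, while the owner, group, and mode are taken from the error message):

# -d lists the directory entry itself rather than its contents
hdfs dfs -ls -d /user
# drwxr-xr-x   - user001 supergroup          0 2024-11-05 16:50 /user

Because /user belongs to user001 with mode drwxr-xr-x, root falls into the "other" class and gets read and execute but not write, which is exactly the access=WRITE denial in the log.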

  2. Change the permissions on HDFS
    If the root user is allowed to act as an HDFS administrator:
    If your system policy permits it, make the root user the owner of the directory (or a member of its group) and make sure the directory carries suitable permissions. You can use either of the following methods:

Change the owner: use the hdfs dfs -chown command to change the owner of a file or directory. For example, to transfer ownership of the /user directory to the root user and the supergroup group:

hdfs dfs -chown root:supergroup /user
Change the permissions: use the hdfs dfs -chmod command to grant the required access. For example, to give the /user directory write permission (if that is really what you intend):

hdfs dfs -chmod 775 /user
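A narrower alternative is to leave /user itself untouched and hand over only root's home directory; this sketch assumes, consistent with the prepareLocalResources frame in the trace above, that Spark's YARN client stages its files under /user/<submitting user>:

# Run these as the HDFS superuser, i.e. the account that started the NameNode
hdfs dfs -mkdir -p /user/root
hdfs dfs -chown root:supergroup /user/root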
If modifying the owner or permissions is not allowed
If security or policy reasons rule out the changes above, you can try the following alternative:

Use another user with the appropriate permissions: specify a user that holds the necessary HDFS permissions in the Spark job configuration, either by changing the user under which the job is submitted or by configuring YARN to run the job as a different user; see the sketch below.
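A minimal sketch of both routes, assuming simple (non-Kerberos) Hadoop authentication and that the user001 account from the error message exists on the client host:

# Option A: submit the job as user001, the owner of /user (script path illustrative)
su - user001 -c "/bigdata/project/spark-yarn/bin/spark-submit --master yarn /home/user001/helloworld.py"

# Option B: stay in the root shell but let the HDFS client act as user001
# (HADOOP_USER_NAME is only honored under simple authentication, not Kerberos)
export HADOOP_USER_NAME=user001
/bigdata/project/spark-yarn/bin/spark-submit --master yarn /root/helloworld.py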
3. Check the Spark and YARN configuration
Make sure your Spark job runs under the correct user account; this may involve the user settings used when you submit the job, or the permission settings of the YARN queue.
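One concrete example of such a setting, offered as a hedged sketch: the failing mkdir in the trace comes from Spark's YARN client preparing its staging directory, which defaults to the submitting user's HDFS home. The spark.yarn.stagingDir property can point it at a location root may write to (hdfs:///tmp/spark-staging below is just an example path):

# Append the property to this installation's spark-defaults.conf
echo "spark.yarn.stagingDir hdfs:///tmp/spark-staging" >> /bigdata/project/spark-yarn/conf/spark-defaults.conf

# The target must exist and be writable by the submitting user
hdfs dfs -mkdir -p /tmp/spark-staging
hdfs dfs -chmod 1777 /tmp/spark-staging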

  4. Follow up on logs and errors
    After making the changes above, resubmit the job and go through the log output carefully to see whether any permission or other configuration errors remain. If the error persists, review the related Hadoop and Spark configuration files, such as core-site.xml, hdfs-site.xml, yarn-site.xml, and spark-defaults.conf.
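While auditing those files, one switch worth knowing is dfs.permissions.enabled in hdfs-site.xml, which controls HDFS permission checking as a whole (true by default; disabling it is only reasonable on a throwaway practice cluster). Its effective value can be read without opening the file:

# Print the effective value of the HDFS permission-checking switch
hdfs getconf -confKey dfs.permissions.enabled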

With these steps you should be able to resolve the permission problem and get your Spark job running successfully in cluster mode.

3. The reply to the second question shows that although the job was now submitted and ran, it still failed, because the executors' Python environment was 2.7 and did not match the Python 3 environment the code requires. The fix is as follows:
There are three virtual machines, hadoop133, hadoop134, and hadoop135, and each one has its own user001 account for running Spark jobs, so the Python 3 environment has to be installed for user001 on every machine. The version installed here is Python 3.8.8.
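Independently of the per-node install described below, Spark also has to be told which interpreter the executors should launch. A hedged sketch using the standard PySpark environment variables, assuming each node ends up with python3 on its PATH as set up below:

# PYSPARK_PYTHON selects the executors' interpreter, PYSPARK_DRIVER_PYTHON the driver's
export PYSPARK_PYTHON=python3
export PYSPARK_DRIVER_PYTHON=python3
/bigdata/project/spark-yarn/bin/spark-submit --master yarn /root/helloworld.py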
(1) Download the Python 3.8.8 source package into /home/user001/ on all three virtual machines; keeping it there avoids the permission problems that would otherwise show up later when repointing python3.
Once the package is downloaded, install Python 3.8.8 under /home/user001/.

The key commands are as follows:
Step 1: install the dependencies
First, make sure all the necessary build tools and libraries are installed:

sudo yum groupinstall -y "Development Tools"
sudo yum install -y zlib-devel bzip2 bzip2-devel readline-devel sqlite sqlite-devel openssl-devel tk-devel libffi-devel

Step 2: unpack, build, and install Python 3.8.8
Change into the directory holding the source package:
cd /home/user001
Unpack the package:
tar xzf Python-3.8.8.tgz
Enter the unpacked directory:
cd Python-3.8.8
Configure the build:
./configure --prefix=/usr/local --enable-shared LDFLAGS="-Wl,-rpath /usr/local/lib"
Build and install:
make
sudo make altinstall
Use make altinstall instead of make install to avoid overwriting the system's default Python 2.7.
Step 3: verify the installation
Make sure Python 3.8.8 was installed successfully:

python3.8 --version
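Because the build used --enable-shared and depends on several -devel packages, a quick sanity check (a hedged extra, not part of the original steps) confirms that the optional modules were actually compiled in; these imports fail if the corresponding -devel libraries were missing at build time:

# All of these modules need their -devel packages present during ./configure && make
python3.8 -c "import ssl, sqlite3, zlib, bz2, readline; print('ok')"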

(2) After Python 3.8.8 is installed, the python3 symlink needs to be repointed at the new installation.
The key commands are:
[user001@hadoop135 Python-3.8.8]$ sudo rm -f /usr/bin/python3
[user001@hadoop135 Python-3.8.8]$ sudo ln -s /home/user001/Python-3.8.8/python /usr/bin/python3
[user001@hadoop135 Python-3.8.8]$ ls -l /usr/bin/python3
lrwxrwxrwx. 1 root root 33 11月 6 14:07 /usr/bin/python3 -> /home/user001/Python-3.8.8/python
[user001@hadoop135 Python-3.8.8]$ python3 --version
Python 3.8.8
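The repointing has to be done on all three machines; a quick hedged check that every node now resolves python3 to 3.8.8 (assuming passwordless ssh between the hosts):

# Hostnames as used elsewhere in this post
for h in hadoop133 hadoop134 hadoop135; do ssh "$h" 'python3 --version'; done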

With that, the point raised in question 2 is resolved, and the program finally ran through end to end.
