Caused by: java.io.IOException: Permission denied at java.io.UnixFileSystem.createFileExclusively

This post records how a permission error hit while starting Spark was resolved. After adjusting the Hadoop configuration and file-system permissions, spark-shell started successfully and created the SparkContext. It applies to Spark/Hadoop environment setup and troubleshooting.

[hjr@master bin]$ ./spark-shell
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
16/03/31 13:56:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/03/31 13:56:13 INFO spark.SecurityManager: Changing view acls to: hjr
16/03/31 13:56:13 INFO spark.SecurityManager: Changing modify acls to: hjr
16/03/31 13:56:13 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hjr); users with modify permissions: Set(hjr)
16/03/31 13:56:13 INFO spark.HttpServer: Starting HTTP Server
16/03/31 13:56:13 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/03/31 13:56:13 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:39368
16/03/31 13:56:13 INFO util.Utils: Successfully started service 'HTTP class server' on port 39368.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.4.1
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_80)
Type in expressions to have them evaluated.
Type :help for more information.
16/03/31 13:56:22 INFO spark.SparkContext: Running Spark version 1.4.1
16/03/31 13:56:22 WARN spark.SparkConf: 
SPARK_CLASSPATH was detected (set to ':/home/spark/spark-1.4.1-bin-hadoop2.4/lib/mysql-connector-java-5.1.26-bin.jar').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with --driver-class-path to augment the driver classpath
 - spark.executor.extraClassPath to augment the executor classpath

16/03/31 13:56:22 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to ':/home/spark/spark-1.4.1-bin-hadoop2.4/lib/mysql-connector-java-5.1.26-bin.jar' as a work-around.
16/03/31 13:56:22 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to ':/home/spark/spark-1.4.1-bin-hadoop2.4/lib/mysql-connector-java-5.1.26-bin.jar' as a work-around.
16/03/31 13:56:22 INFO spark.SecurityManager: Changing view acls to: hjr
16/03/31 13:56:22 INFO spark.SecurityManager: Changing modify acls to: hjr
16/03/31 13:56:22 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hjr); users with modify permissions: Set(hjr)
16/03/31 13:56:23 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/03/31 13:56:23 INFO Remoting: Starting remoting
16/03/31 13:56:23 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.13.43:40169]
16/03/31 13:56:23 INFO util.Utils: Successfully started service 'sparkDriver' on port 40169.
16/03/31 13:56:23 INFO spark.SparkEnv: Registering MapOutputTracker
16/03/31 13:56:23 INFO spark.SparkEnv: Registering BlockManagerMaster
16/03/31 13:56:23 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-9e5228e8-d1d9-49ac-b187-0da05e0359e4/blockmgr-a77c56e4-b68d-42fa-95f9-c0e783daf7d0
16/03/31 13:56:23 INFO storage.MemoryStore: MemoryStore started with capacity 1069.1 MB
16/03/31 13:56:24 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-9e5228e8-d1d9-49ac-b187-0da05e0359e4/httpd-f9ee9698-2a94-4b33-b660-64cfa3e2e4f2
16/03/31 13:56:24 INFO spark.HttpServer: Starting HTTP Server
16/03/31 13:56:24 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/03/31 13:56:24 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:52432
16/03/31 13:56:24 INFO util.Utils: Successfully started service 'HTTP file server' on port 52432.
16/03/31 13:56:24 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/03/31 13:56:24 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/03/31 13:56:24 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/03/31 13:56:24 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/03/31 13:56:24 INFO ui.SparkUI: Started SparkUI at http://192.168.13.43:4040
16/03/31 13:56:25 INFO client.AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@master.casking.com:7077/user/Master...
16/03/31 13:56:25 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160331135625-0015
16/03/31 13:56:25 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 54303.
16/03/31 13:56:25 INFO netty.NettyBlockTransferService: Server created on 54303
16/03/31 13:56:25 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/03/31 13:56:25 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.13.43:54303 with 1069.1 MB RAM, BlockManagerId(driver, 192.168.13.43, 54303)
16/03/31 13:56:25 INFO storage.BlockManagerMaster: Registered BlockManager
16/03/31 13:56:26 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/03/31 13:56:26 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.
16/03/31 13:56:28 INFO hive.HiveContext: Initializing execution hive, version 0.13.1
java.lang.RuntimeException: java.io.IOException: Permission denied
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:330)
        at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:116)
        at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:163)
        at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:161)
        at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:168)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
        at $iwC$$iwC.<init>(<console>:9)
        at $iwC.<init>(<console>:18)
        at <init>(<console>:20)
        at .<init>(<console>:24)
        at .<clinit>(<console>)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at $print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
        at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
        at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
        at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
        at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:130)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
        at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
        at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
        at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
        at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:157)
        at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:106)
        at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: Permission denied
        at java.io.UnixFileSystem.createFileExclusively(Native Method)
        at java.io.File.createTempFile(File.java:2001)
        at org.apache.hadoop.hive.ql.session.SessionState.createTempFile(SessionState.java:432)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:328)
        ... 56 more

<console>:10: error: not found: value sqlContext
       import sqlContext.implicits._
              ^
<console>:10: error: not found: value sqlContext
       import sqlContext.sql
              ^

scala> 

Check the path configured for hadoop.tmp.dir in Hadoop's core-site.xml and fix its access permissions:

 <property>
     <name>hadoop.tmp.dir</name>
     <value>file:/home/spark/hadoop-2.4.0/temp</value>
     <description>A base for other temporary directories.</description>
 </property>
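
If that directory is owned by root and is not group-writable, the hjr user cannot create files under it. A quick way to inspect the owner and mode (the path below is the value from my core-site.xml; substitute your own):

ls -ld /home/spark/hadoop-2.4.0/temp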

My Spark and Hadoop installations were both set up as the root user, so the fix is to add the new user hjr to the root group and then adjust the permissions on the hadoop.tmp.dir path (the permission change itself is sketched after the group commands below):

The Hadoop/Spark cluster is installed and configured under the root group:

[root@master spark-1.4.1-bin-hadoop2.4]# id
uid=0(root) gid=0(root) groups=0(root)
[root@master spark-1.4.1-bin-hadoop2.4]# 

Check which group user hjr belongs to:

[root@master spark-1.4.1-bin-hadoop2.4]# su hjr
[hjr@master spark-1.4.1-bin-hadoop2.4]$ id
uid=1012(hjr) gid=1012(hjr) groups=1012(hjr)
[hjr@master spark-1.4.1-bin-hadoop2.4]$ 

Add user hjr to the root group:

[hjr@master spark-1.4.1-bin-hadoop2.4]$ su root
Password: 
[root@master spark-1.4.1-bin-hadoop2.4]# usermod -g root hjr
[root@master spark-1.4.1-bin-hadoop2.4]# su hjr
[hjr@master spark-1.4.1-bin-hadoop2.4]$ id
uid=1012(hjr) gid=0(root) groups=0(root)
[hjr@master spark-1.4.1-bin-hadoop2.4]$ 
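
With hjr now in the root group, the hadoop.tmp.dir path still has to be group-writable for this change to take effect. The exact command is not shown above; a minimal sketch, assuming the path from core-site.xml:

chmod -R 775 /home/spark/hadoop-2.4.0/temp

Note that usermod -g replaces hjr's primary group. To keep the original primary group and only append root as a supplementary group, use usermod -aG root hjr instead.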

Exit spark-shell, and the real problem shows up:

The access permissions on the /tmp directory at the filesystem root had not been raised:

scala> exit
warning: there were 1 deprecation warning(s); re-run with -deprecation for details
16/04/06 14:12:40 INFO spark.SparkContext: Invoking stop() from shutdown hook
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/04/06 14:12:40 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/04/06 14:12:40 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.13.43:4040
16/04/06 14:12:40 INFO scheduler.DAGScheduler: Stopping DAGScheduler
16/04/06 14:12:40 INFO cluster.SparkDeploySchedulerBackend: Shutting down all executors
16/04/06 14:12:40 INFO cluster.SparkDeploySchedulerBackend: Asking each executor to shut down
16/04/06 14:12:40 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/04/06 14:12:40 INFO util.Utils: path = /tmp/spark-4321496c-dcdc-4d6f-abe2-576d81b4a2fb/blockmgr-44be0eb3-a333-4f63-827d-05c926a3d3d1, already present as root for deletion.
16/04/06 14:12:40 INFO storage.MemoryStore: MemoryStore cleared
16/04/06 14:12:40 INFO storage.BlockManager: BlockManager stopped
16/04/06 14:12:40 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
16/04/06 14:12:40 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/04/06 14:12:40 INFO spark.SparkContext: Successfully stopped SparkContext
16/04/06 14:12:40 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/04/06 14:12:40 INFO util.Utils: Shutdown hook called
16/04/06 14:12:40 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/04/06 14:12:40 INFO util.Utils: Deleting directory /tmp/spark-4321496c-dcdc-4d6f-abe2-576d81b4a2fb
16/04/06 14:12:40 INFO util.Utils: Deleting directory /tmp/spark-b177a1a8-0c25-4475-8f06-dddb3a49fe08
16/04/06 14:12:40 INFO util.Utils: Deleting directory /tmp/spark-f731a0a4-c5cf-4db5-baab-ec3a4d28cf26
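
The diagnosis can be confirmed by checking the mode on /tmp; on a normally configured system it is world-writable with the sticky bit set (drwxrwxrwt):

ls -ld /tmp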

Switch to the root user and raise the permissions on the /tmp directory:

chmod -R 777 /tmp
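
(A recursive 777 without the sticky bit lets any user delete or rename other users' files under /tmp; the conventional mode for the directory itself is 1777, which chmod 1777 /tmp restores while still keeping it world-writable.)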

Run spark-shell again, and everything works:

[hjr@master bin]$ ./spark-shell
16/04/06 15:07:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/04/06 15:07:39 INFO spark.SecurityManager: Changing view acls to: hjr
16/04/06 15:07:39 INFO spark.SecurityManager: Changing modify acls to: hjr
16/04/06 15:07:39 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hjr); users with modify permissions: Set(hjr)
16/04/06 15:07:40 INFO spark.HttpServer: Starting HTTP Server
16/04/06 15:07:40 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/04/06 15:07:40 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:51949
16/04/06 15:07:40 INFO util.Utils: Successfully started service 'HTTP class server' on port 51949.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.4.1
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_80)
Type in expressions to have them evaluated.
Type :help for more information.
16/04/06 15:07:49 INFO spark.SparkContext: Running Spark version 1.4.1
16/04/06 15:07:49 WARN spark.SparkConf: 
SPARK_CLASSPATH was detected (set to ':/home/spark/spark-1.4.1-bin-hadoop2.4/lib/mysql-connector-java-5.1.26-bin.jar').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with --driver-class-path to augment the driver classpath
 - spark.executor.extraClassPath to augment the executor classpath

16/04/06 15:07:49 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to ':/home/spark/spark-1.4.1-bin-hadoop2.4/lib/mysql-connector-java-5.1.26-bin.jar' as a work-around.
16/04/06 15:07:49 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to ':/home/spark/spark-1.4.1-bin-hadoop2.4/lib/mysql-connector-java-5.1.26-bin.jar' as a work-around.
16/04/06 15:07:49 INFO spark.SecurityManager: Changing view acls to: hjr
16/04/06 15:07:49 INFO spark.SecurityManager: Changing modify acls to: hjr
16/04/06 15:07:49 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hjr); users with modify permissions: Set(hjr)
16/04/06 15:07:50 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/04/06 15:07:50 INFO Remoting: Starting remoting
16/04/06 15:07:50 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.13.43:59810]
16/04/06 15:07:50 INFO util.Utils: Successfully started service 'sparkDriver' on port 59810.
16/04/06 15:07:50 INFO spark.SparkEnv: Registering MapOutputTracker
16/04/06 15:07:50 INFO spark.SparkEnv: Registering BlockManagerMaster
16/04/06 15:07:50 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-eb699480-b70b-4273-b8e9-433764a62c0e/blockmgr-22f931fa-c02a-409e-8acc-af6b4ce3f309
16/04/06 15:07:50 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
16/04/06 15:07:51 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-eb699480-b70b-4273-b8e9-433764a62c0e/httpd-d5ad4baa-989e-4731-9944-652645c86fef
16/04/06 15:07:51 INFO spark.HttpServer: Starting HTTP Server
16/04/06 15:07:51 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/04/06 15:07:51 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:47726
16/04/06 15:07:51 INFO util.Utils: Successfully started service 'HTTP file server' on port 47726.
16/04/06 15:07:51 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/04/06 15:07:51 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/04/06 15:07:51 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/04/06 15:07:51 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/04/06 15:07:51 INFO ui.SparkUI: Started SparkUI at http://192.168.13.43:4040
16/04/06 15:07:52 INFO client.AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@master.casking.com:7077/user/Master...
16/04/06 15:07:52 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160406150752-0004
16/04/06 15:07:52 INFO client.AppClient$ClientActor: Executor added: app-20160406150752-0004/0 on worker-20160406135801-192.168.13.45-47341 (192.168.13.45:47341) with 1 cores
16/04/06 15:07:52 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160406150752-0004/0 on hostPort 192.168.13.45:47341 with 1 cores, 512.0 MB RAM
16/04/06 15:07:52 INFO client.AppClient$ClientActor: Executor added: app-20160406150752-0004/1 on worker-20160406135801-192.168.13.44-60023 (192.168.13.44:60023) with 1 cores
16/04/06 15:07:52 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160406150752-0004/1 on hostPort 192.168.13.44:60023 with 1 cores, 512.0 MB RAM
16/04/06 15:07:52 INFO client.AppClient$ClientActor: Executor updated: app-20160406150752-0004/1 is now LOADING
16/04/06 15:07:52 INFO client.AppClient$ClientActor: Executor updated: app-20160406150752-0004/0 is now LOADING
16/04/06 15:07:52 INFO client.AppClient$ClientActor: Executor updated: app-20160406150752-0004/0 is now RUNNING
16/04/06 15:07:52 INFO client.AppClient$ClientActor: Executor updated: app-20160406150752-0004/1 is now RUNNING
16/04/06 15:07:53 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51708.
16/04/06 15:07:53 INFO netty.NettyBlockTransferService: Server created on 51708
16/04/06 15:07:53 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/04/06 15:07:53 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.13.43:51708 with 267.3 MB RAM, BlockManagerId(driver, 192.168.13.43, 51708)
16/04/06 15:07:53 INFO storage.BlockManagerMaster: Registered BlockManager
16/04/06 15:07:53 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/04/06 15:07:53 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.
16/04/06 15:07:55 INFO hive.HiveContext: Initializing execution hive, version 0.13.1
16/04/06 15:07:56 INFO hive.metastore: Trying to connect to metastore with URI thrift://192.168.13.43:9083
16/04/06 15:07:56 INFO hive.metastore: Connected to metastore.
16/04/06 15:07:57 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.13.45:51876/user/Executor#820989273]) with ID 0
16/04/06 15:07:57 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.13.44:38556/user/Executor#-638549016]) with ID 1
16/04/06 15:07:57 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.13.44:35787 with 267.3 MB RAM, BlockManagerId(1, 192.168.13.44, 35787)
16/04/06 15:07:57 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.13.45:33428 with 267.3 MB RAM, BlockManagerId(0, 192.168.13.45, 33428)
16/04/06 15:07:58 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
16/04/06 15:07:58 INFO repl.SparkILoop: Created sql context (with Hive support)..
SQL context available as sqlContext.

scala> 