Spark Learning Notes

This post records errors hit while learning Spark: failures when running jobs on the cluster and exceptions when submitting them, including an argument error when using the `run-example` command and a configuration problem when submitting to a YARN cluster with `spark-submit`. After adjusting the command arguments, the SparkPi example ran successfully.

This is my Spark learning journal, where I record problems encountered while studying and practicing Spark.

1. An error when running one of Spark's bundled examples on the cluster

I executed the following command:

$ ./bin/run-example org.apache.spark.examples.SparkPi   spark://hadoop11:9090

and got the following exception:

Exception in thread "main" java.lang.NumberFormatException: For input string: "spark://hadoop11:9090"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
at java.lang.Integer.parseInt(Integer.java:449)
at java.lang.Integer.parseInt(Integer.java:499)
at scala.collection.immutable.StringLike$class.toInt(StringLike.scala:229)
at scala.collection.immutable.StringOps.toInt(StringOps.scala:31)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:29)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:292)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:55)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
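The trace itself points at the cause: `SparkPi.scala:29` calls `StringOps.toInt` on the first program argument, so the example expects a slice count there, not a master URL. A minimal sketch of that argument handling (a paraphrase for illustration, not the verbatim example source):

```scala
object SparkPiArgs {
  def main(args: Array[String]): Unit = {
    // SparkPi treats args(0) as the number of slices; passing
    // "spark://hadoop11:9090" as the first argument makes toInt
    // throw the NumberFormatException shown above.
    val slices = if (args.length > 0) args(0).toInt else 2
    println(s"would estimate Pi using $slices slices")
  }
}
```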

Changing the command so that the slice count comes first,

$ ./bin/run-example org.apache.spark.examples.SparkPi  2  spark://hadoop11:9090

made it run successfully. To run it locally instead:

$ ./bin/run-example org.apache.spark.examples.SparkPi  2  local
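For context, SparkPi estimates Pi with a Monte Carlo method: it samples random points in the unit square and counts how many land inside the unit circle. A self-contained sketch in the same spirit (simplified, assuming a Spark 1.x-style `SparkContext`; not the bundled example verbatim):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import scala.math.random

object MiniSparkPi {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("MiniSparkPi").setMaster("local[2]")
    val sc = new SparkContext(conf)
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = 100000 * slices
    // Count points (x, y) drawn uniformly from [-1, 1)^2 that fall
    // inside the unit circle; the hit ratio approximates Pi / 4.
    val count = sc.parallelize(1 to n, slices).map { _ =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    println(s"Pi is roughly ${4.0 * count / n}")
    sc.stop()
  }
}
```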

2. Submitting a job with ./spark-submit

$ ./spark-submit --class org.apache.spark.examples.SparkPi --master spark://hadoop11:7777 /home/hadoop/cloudera5/spark-1.0.0-bin-hadoop2/lib/spark-examples-1.0.0-hadoop2.2.0.jar 2

This failed with the following exception:

14/12/30 10:04:26 WARN client.AppClient$ClientActor: Could not connect to akka.tcp://sparkMaster@hadoop11:7777: akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkMaster@hadoop11:7777]

Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: All masters are unresponsive! Giving up.
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1033)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1017)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1015)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1015)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:633)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1207)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

The problem turned out to be the `--master` argument: our cluster runs in YARN mode, so there was no standalone master listening at spark://hadoop11:7777 for the client to connect to. Changing the command as follows made the job run successfully:

$ ./spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster /home/hadoop/cloudera5/spark-1.0.0-bin-hadoop2/lib/spark-examples-1.0.0-hadoop2.2.0.jar 2

The master URL formats that can be passed to Spark are described in the "Master URLs" section of the official Spark documentation.
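For quick reference, here is a sketch of the common master URL forms against the Spark 1.x API (note that `yarn-cluster` is only accepted via `--master` on `spark-submit`, not via `setMaster` in application code):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MasterUrlDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("MasterUrlDemo")
      // Other accepted forms include:
      //   "local"               -- one local thread
      //   "local[4]"            -- four local worker threads
      //   "spark://host:7077"   -- a standalone master (default port 7077)
      //   "yarn-client"         -- driver runs locally, executors on YARN
      .setMaster("local[2]")
    val sc = new SparkContext(conf)
    println(sc.master)
    sc.stop()
  }
}
```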
