Run:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://192.168.0.63:7077 --executor-memory 10G --total-executor-cores 100 examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar 1000
The job failed with the following errors; the fixes are described below:
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
2014-10-13 09:52:35,142 ERROR akka.remote.EndpointWriter: AssociationError [akka.tcp://sparkWorker@namenode1:7078] -> [akka.tcp://sparkExecutor@namenode1:37398]: Error [Association failed with [akka.tcp://sparkExecutor@namenode1:37398]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@namenode1:37398]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: namenode1/192.168.0.60:37398
]
2014-10-13 09:52:35,150 ERROR akka.remote.EndpointWriter: AssociationError [akka.tcp://sparkWorker@namenode1:7078] -> [akka.tcp://sparkExecutor@namenode1:37398]: Error [Association failed with [akka.tcp://sparkExecutor@namenode1:37398]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@namenode1:37398]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: namenode1/192.168.0.60:37398
org.apache.spark.SparkException: Job aborted due to stage failure: All masters are unresponsive! Giving up.
Fixes:
1. executor-memory must not exceed the worker's maximum heap size (worker_max_heapsize). Requesting 10G was more than the workers could offer, which is why the job never got any resources accepted.
2. Change the master address 192.168.0.63:7077 to datanode3:7077, the hostname the master actually registered with; otherwise Akka association fails with "Connection refused".
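With both fixes applied, the resubmitted command might look like the sketch below. The 8G value is an assumption for illustration; set --executor-memory to whatever your workers' worker_max_heapsize actually allows.

```shell
# Use the master's registered hostname instead of the raw IP,
# and keep --executor-memory within the worker's max heap
# (worker_max_heapsize; 8G here is an assumed example value).
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://datanode3:7077 \
  --executor-memory 8G \
  --total-executor-cores 100 \
  examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar 1000
```

If the workers themselves were started with too little memory, the worker heap can be raised in conf/spark-env.sh (e.g. SPARK_WORKER_MEMORY) before restarting the cluster.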