Step 1:
You can start by setting up a pseudo-distributed cluster first; the fully distributed setup only adds a few modifications on top of it:
http://blog.youkuaiyun.com/ymf827311945/article/details/73733916
Step 2:
On node11, run:
vi /opt/apps/spark/spark-1.6.0-bin-hadoop2.6/conf/spark-env.sh
Add the following property:
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node11:2181,node12:2181,node13:2181"
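This property tells the Spark master to store recovery state in ZooKeeper, so a standby master on another node can take over if the active one fails. Before starting the masters, it can help to confirm which servers the ensemble URL actually names. Below is a minimal sketch (a hypothetical helper, not part of Spark) that parses the `spark.deploy.zookeeper.url` value into its individual members; in practice you could follow up with `echo ruok | nc <host> <port>` against each one to confirm the ZooKeeper server answers `imok` before starting Spark:

```shell
#!/bin/sh
# Sketch: split the spark.deploy.zookeeper.url value (comma-separated
# host:port pairs) into individual ensemble members and list them.
ZK_URL="node11:2181,node12:2181,node13:2181"

RESULT=""
for hostport in $(echo "$ZK_URL" | tr ',' ' '); do
  host=${hostport%:*}   # strip ":port" suffix
  port=${hostport#*:}   # strip "host:" prefix
  RESULT="$RESULT$host:$port "
  echo "ZooKeeper member: $host, port: $port"
done
```

Running it prints one line per ensemble member (node11, node12, node13), which should match the servers listed in each node's ZooKeeper configuration.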
Step 3:
Run:
scp /opt/apps/spark/spark-1.6.0-bin-hadoop2.6/conf/spark-env.sh node12:/opt/apps/spark/spark-1.6.0-bin-hadoop2.6/conf/spark-env.sh
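Since every node uses the same install path, the per-node scp commands above can also be collapsed into one loop. A minimal sketch, assuming passwordless SSH from node11 and that node12 and node13 are the remaining cluster nodes (the actual `scp` call is left commented out so the script is safe to dry-run):

```shell
#!/bin/sh
# Sketch: distribute the edited spark-env.sh to the other nodes in one loop.
SPARK_CONF=/opt/apps/spark/spark-1.6.0-bin-hadoop2.6/conf/spark-env.sh

for node in node12 node13; do
  # Dry run: print the command that would be executed.
  echo "scp $SPARK_CONF $node:$SPARK_CONF"
  # Uncomment to actually copy:
  # scp "$SPARK_CONF" "$node:$SPARK_CONF"
done
```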
scp /opt/apps/spark/spark-1.6