Installing spark-2.2.2-bin-hadoop2.7: https://blog.youkuaiyun.com/drl_blogs/article/details/91948394
1. Edit conf/spark-env.sh on the master node
export JAVA_HOME=/usr/local/jdk1.8.0_211
# With ZooKeeper-based HA, do not pin a fixed master; leave these commented out:
# export SPARK_MASTER_HOST=hadoop01
# export SPARK_MASTER_PORT=7077
# Master recovery state is kept in ZooKeeper under /spark:
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop01:2181,hadoop02:2181,hadoop03:2181 -Dspark.deploy.zookeeper.dir=/spark"
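Before starting Spark, the ZooKeeper ensemble named above should already be running. A minimal reachability sketch, assuming nc is installed and the servers answer the classic ruok four-letter command (the function name is made up for illustration):

```shell
# zk_reachable HOST - returns 0 if the ZooKeeper server on HOST:2181
# answers the "ruok" health probe with "imok", nonzero otherwise.
zk_reachable() {
  local reply
  reply=$(echo ruok | nc -w 2 "$1" 2181)
  [ "$reply" = "imok" ]
}

# Usage (hostnames from this guide):
#   for h in hadoop01 hadoop02 hadoop03; do zk_reachable "$h" || echo "$h down"; done
```

Note that newer ZooKeeper releases require four-letter commands to be whitelisted (4lw.commands.whitelist), so this check may need enabling there.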
2. Copy the file to the other nodes
scp /usr/local/spark-2.2.2-bin-hadoop2.7/conf/spark-env.sh hadoop02:/usr/local/spark-2.2.2-bin-hadoop2.7/conf/
scp /usr/local/spark-2.2.2-bin-hadoop2.7/conf/spark-env.sh hadoop03:/usr/local/spark-2.2.2-bin-hadoop2.7/conf/
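With more hosts, the two scp commands above can be folded into a loop; a sketch using the paths and hostnames from this guide (the function name is made up):

```shell
# distribute_conf - copy spark-env.sh to each of the other cluster nodes.
# Stops at the first failed copy.
distribute_conf() {
  local conf_dir=/usr/local/spark-2.2.2-bin-hadoop2.7/conf
  local host
  for host in hadoop02 hadoop03; do
    scp "$conf_dir/spark-env.sh" "$host:$conf_dir/" || return 1
  done
}
```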
3. Start the cluster
1) On the primary master node:
start-all.sh
2) On the standby master node:
start-master.sh
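To confirm which master is currently active, the standalone master's web UI exposes a JSON view (http://<host>:8080/json) whose status field reports ALIVE or STANDBY. A small helper, assuming curl and the default UI port 8080 (the function name is hypothetical):

```shell
# master_state HOST - print the "status" field (e.g. ALIVE or STANDBY)
# from the Spark standalone master's JSON endpoint on HOST:8080.
master_state() {
  curl -s "http://$1:8080/json" |
    sed -n 's/.*"status" *: *"\([A-Z]*\)".*/\1/p'
}

# Usage: master_state hadoop01; master_state hadoop02
```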
4. Done.
This post walks through installing and configuring Spark 2.2.2 in a Hadoop environment: editing spark-env.sh on the master node, setting JAVA_HOME and the ZooKeeper-related parameters, distributing the configuration file to the other nodes, and starting the Spark cluster.