Spark cluster installation (on h15, h16, and h18)
Note: passwordless SSH login must be configured between every pair of nodes.
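The key distribution for passwordless login can be sketched as follows (the hostnames come from this cluster; it assumes OpenSSH's ssh-copy-id is available on each node):

```shell
# Run this on EACH of h15, h16, h18 so every pair can log in without a password.
# Generate an RSA keypair once, if one does not already exist:
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Push the public key to every node (including the local one):
for host in h15 h16 h18; do
    ssh-copy-id "$host"   # appends the public key to the remote authorized_keys
done
```

Afterwards, `ssh h16` from h15 (and every other pair) should open a shell without prompting for a password.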
1. After ZooKeeper has been configured successfully, download spark-1.3.1, upload it to the nodes, and extract it. To avoid command-name conflicts with Hadoop, do not configure environment variables for Spark.
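The download-and-extract step might look like this on h15 (the mirror URL is an assumption; the /home location matches the conf paths used in the later steps):

```shell
cd /home
# Fetch the pre-built Spark 1.3.1 package for Hadoop 2.4 (URL assumed; any Apache mirror works)
wget https://archive.apache.org/dist/spark/spark-1.3.1/spark-1.3.1-bin-hadoop2.4.tgz
tar -zxvf spark-1.3.1-bin-hadoop2.4.tgz
# Deliberately no PATH / SPARK_HOME export here, so Spark scripts such as
# start-all.sh do not shadow Hadoop commands of the same name.
```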
2. Configure h15 first:
(1) # cp slaves.template slaves
(2) # cp spark-env.sh.template spark-env.sh
(3) Edit /home/spark-1.3.1-bin-hadoop2.4/conf/slaves (lists the machines that run Workers):
# A Spark Worker will be started on each of the machines listed below.
#localhost
h16
h18
3. Edit /home/spark-1.3.1-bin-hadoop2.4/conf/spark-env.sh:
#!/usr/bin/env bash