Spark standalone cluster deployment
Needs only ssh + JDK + Spark (the with-Hadoop binary build) + [ZooKeeper (for HA)]
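A minimal sketch of the standalone setup, assuming Spark 2.4.0 is unpacked at the same path on every node and cdh1 is the master (the worker hostnames and JDK path are assumptions, not from the source):

# conf/slaves -- one worker hostname per line
cdh2
cdh3

# conf/spark-env.sh
export JAVA_HOME=/usr/java/default   # assumed JDK install path
export SPARK_MASTER_HOST=cdh1

# run on the master; starts the master plus all workers listed in conf/slaves
sbin/start-all.sh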
Spark on YARN cluster deployment
Needs ssh + JDK + Hadoop + YARN + Spark + [ZooKeeper (for HA)]
bin/spark-shell --master yarn
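For --master yarn to locate the cluster, Spark must be pointed at the Hadoop client configs; a minimal sketch, assuming Hadoop is installed under /opt/hadoop (an assumed path):

# conf/spark-env.sh -- point Spark at the YARN/HDFS client configs
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop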
Check cluster status:
http://<driver-host>:4040/   (running application UI)
http://<master-host>:8080/   (standalone Master web UI)
Testing from inside the cluster:
bin/spark-submit --class org.apache.spark.examples.JavaSparkPi --master spark://cdh1:7077 examples/jars/spark-examples_2.11-2.4.0.jar 2
bin/run-example SparkPi 10
val textFile = spark.read.textFile("file:///root/abc")  # Spark reads a local file: every node must have the data file at the same path; an HDFS file is shared, so one copy is enough
textFile.count()
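Since one HDFS copy is shared across the cluster, a sketch of staging the file on HDFS instead (the /data target path is an assumption), after which it can be read with spark.read.textFile("hdfs:///data/abc"):

hdfs dfs -mkdir -p /data           # create a target directory on HDFS (assumed path)
hdfs dfs -put /root/abc /data/abc  # upload the local file once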
val nums = sc.parallelize(List(1,2,3,4,5))
nums.count()
Testing from a client outside the cluster:
bin/spark-submit \
  --class org.apache.spark.examples.JavaSparkPi \
  --master spark://cdh1:7077 \
  --total-executor-cores 2 \
  --num-executors 1 \
  --driver-memory 512m \
  --executor-memory 512m \
  --executor-cores 1 \
  examples/jars/spark-examples_2.11-2.4.0.jar 2
# Note: --num-executors only takes effect on YARN; a standalone master sizes the job via --total-executor-cores / --executor-cores
Or:
bin/spark-submit --class org.apache.spark.examples.JavaSparkPi --master spark://cdh1:7077 --driver-memory 512m --executor-memory 512m --total-executor-cores 2 --executor-cores 1 examples/jars/spark-examples_2.11-2.4.0.jar 2
Or simply:
bin/spark-submit --class org.apache.spark.examples.JavaSparkPi --master spark://cdh1:7077 examples/jars/spark-examples_2.11-2.4.0.jar 2
# Note: first add the client machine's IP and hostname to the hosts file on every node of the cluster
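A sketch of the hosts entry needed on each cluster node (the IP and hostname below are hypothetical placeholders for the client machine):

# /etc/hosts on every cluster node
192.168.56.100  client1   # hypothetical client IP and hostname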
Official quick start
http://spark.incubator.apache.org/docs/2.4.0/quick-start.html
Official deployment guide
http://spark.incubator.apache.org/docs/2.4.0/spark-standalone.html#installing-spark-standalone-to-a-cluster
Spark HA reference
https://www.cnblogs.com/phy2020/p/12723547.html
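For the ZooKeeper-based HA mentioned above, standalone masters switch to ZooKeeper recovery through SPARK_DAEMON_JAVA_OPTS; a minimal sketch, where the ZooKeeper quorum addresses are assumptions:

# conf/spark-env.sh on every master candidate
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=cdh1:2181,cdh2:2181,cdh3:2181 \
  -Dspark.deploy.zookeeper.dir=/spark"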