https://www.jianshu.com/p/314129ceb883
Pull the Docker image
sudo docker pull sequenceiq/spark:1.6.0
Run the Docker container
sudo docker run -it --name spark --rm sequenceiq/spark:1.6.0 /bin/bash
Run a job
$ cd /usr/local/spark
$ bin/spark-submit --master yarn-client --class org.apache.spark.examples.JavaWordCount lib/spark-examples-1.6.0-hadoop2.6.0.jar file:/usr/local/hadoop/input/
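The JavaWordCount example simply counts how often each word appears in the input files. The same logic can be sketched as a plain shell pipeline (no Spark, sample input made up for illustration), which is handy for sanity-checking the expected output on small data:

```shell
# Word-count equivalent of the JavaWordCount example:
# split on whitespace, then count occurrences of each word.
printf 'hello spark hello docker\n' \
  | tr -s ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn
```

Running this prints each distinct word with its count, most frequent first.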
We can also combine starting the container and running the job in a single command, for example:
sudo docker run -it --name spark --rm sequenceiq/spark:1.6.0 sh -c "spark-submit --master yarn-client --class org.apache.spark.examples.JavaWordCount /usr/local/spark/lib/spark-examples-1.6.0-hadoop2.6.0.jar file:/usr/local/hadoop/input/"