Spark Submitting Applications

The spark-submit script in Spark's bin directory is used to launch applications on a cluster. It can use all of Spark's supported cluster managers through a uniform interface, so you don't have to configure your application specially for each one.

Bundling Your Application’s Dependencies

If your code depends on other projects, you will need to package them alongside your application in order to distribute the code to a Spark cluster. To do this, create an assembly ("uber") jar containing your code and its dependencies; both sbt and Maven have assembly plugins. When building the assembly jar, list Spark and Hadoop as provided dependencies, since the cluster manager supplies them at runtime. For Python, you can instead pass .py, .zip, or .egg files to spark-submit's --py-files argument.
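For a Python application, one common way to bundle dependencies is to zip the helper modules and ship the archive with --py-files. A sketch, assuming a hypothetical app.py with helper modules under a deps/ directory (both names are illustrative, not from this page):

```shell
# Bundle local helper modules into a single archive. (deps/ and app.py
# are hypothetical paths used only for illustration.)
cd deps && zip -r ../deps.zip . && cd ..

# Submit the app; deps.zip is added to the PYTHONPATH on the executors.
./bin/spark-submit \
  --master spark://207.184.161.138:7077 \
  --py-files deps.zip \
  app.py
```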

Launching Applications with spark-submit

./bin/spark-submit \
  --class <main-class> \
  --master <master-url> \
  --deploy-mode <deploy-mode> \
  --conf <key>=<value> \
  ... # other options
  <application-jar> \
  [application-arguments]


Some of the commonly used options are:

  • --class: The entry point for your application (e.g. org.apache.spark.examples.SparkPi)
  • --master: The master URL for the cluster (e.g. spark://23.195.26.187:7077)
  • --deploy-mode: Whether to deploy your driver on the worker nodes (cluster) or locally as an external client (client) (default: client)
  • --conf: Arbitrary Spark configuration property in key=value format. For values that contain spaces, wrap "key=value" in quotes.
  • application-jar: Path to a bundled jar including your application and all dependencies. The URL must be globally visible inside of your cluster, for instance, an hdfs:// path or a file:// path that is present on all nodes.
  • application-arguments: Arguments passed to the main method of your main class, if any
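The quoting rule for --conf exists because the shell splits words on spaces before spark-submit ever sees them. A minimal shell demonstration (count_args is a throwaway helper defined here, not part of Spark):

```shell
# Print how many arguments the function receives, i.e. how many words
# the shell produced after splitting.
count_args() { echo $#; }

# A --conf value that contains a space:
val='spark.executor.extraJavaOptions=-XX:+PrintGCDetails -verbose:gc'

count_args --conf $val      # unquoted: prints 3 -- the value split in two
count_args --conf "$val"    # quoted:   prints 2 -- one flag, one intact value
```

Without the quotes, spark-submit would see `-verbose:gc` as a stray extra argument instead of part of the property value.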


# Run application locally on 8 cores
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master local[8] \
  /path/to/examples.jar \
  100


# Run on a Spark standalone cluster in client deploy mode

./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://207.184.161.138:7077 \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000


# Run on a Spark standalone cluster in cluster deploy mode with supervise

./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://207.184.161.138:7077 \
  --deploy-mode cluster \
  --supervise \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000


# Run on a YARN cluster

export HADOOP_CONF_DIR=XXX
# --deploy-mode can be client for client mode
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  --executor-memory 20G \
  --num-executors 50 \
  /path/to/examples.jar \
  1000


# Run a Python application on a Spark standalone cluster

./bin/spark-submit \
  --master spark://207.184.161.138:7077 \
  examples/src/main/python/pi.py \
  1000


# Run on a Mesos cluster in cluster deploy mode with supervise

./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master mesos://207.184.161.138:7077 \
  --deploy-mode cluster \
  --supervise \
  --executor-memory 20G \
  --total-executor-cores 100 \
  http://path/to/examples.jar \
  1000


# Run on a Kubernetes cluster in cluster deploy mode

./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master k8s://xx.yy.zz.ww:443 \
  --deploy-mode cluster \
  --executor-memory 20G \
  --num-executors 50 \
  http://path/to/examples.jar \
  1000


