Launching Applications with spark-submit【submitting jobs to a cluster with the script; the five deploy modes】

This article describes how to deploy and submit applications using Spark's bin/spark-submit script. It explains the purpose of options such as --class, --master, and --deploy-mode, and walks through examples for different scenarios, including local execution, Spark standalone clusters, YARN clusters, and Mesos clusters.


Once a user application is bundled, it can be launched using the bin/spark-submit script. This script takes care of setting up the classpath with Spark and its dependencies, and can support the different cluster managers and deploy modes that Spark supports:

./bin/spark-submit \
  --class <main-class> \
  --master <master-url> \
  --deploy-mode <deploy-mode> \
  --conf <key>=<value> \
  ... # other options
  <application-jar> \
  [application-arguments]

Some of the commonly used options are:

  • --class: The entry point for your application (e.g. org.apache.spark.examples.SparkPi)
  • --master: The master URL for the cluster (e.g. spark://23.195.26.187:7077)
  • --deploy-mode: Whether to deploy your driver on the worker nodes (cluster) or locally as an external client (client) (default: client)
  • --conf: Arbitrary Spark configuration property in key=value format. For values that contain spaces, wrap "key=value" in quotes.
  • application-jar: Path to a bundled jar including your application and all dependencies. The URL must be globally visible inside of your cluster, for instance, an hdfs:// path or a file:// path that is present on all nodes.
  • application-arguments: Arguments passed to the main method of your main class, if any
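
As a sketch of the quoting rule for --conf, a value containing spaces can be wrapped in quotes as a whole (spark.executor.extraJavaOptions is a real Spark property; the particular JVM flags and the placeholder paths are illustrative):

./bin/spark-submit \
  --class <main-class> \
  --master <master-url> \
  --conf spark.eventLog.enabled=false \
  --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
  <application-jar> \
  [application-arguments]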

A common deployment strategy is to submit your application from a gateway machine that is physically co-located with your worker machines (e.g. the Master node in a standalone EC2 cluster). In this setup, client mode is appropriate. In client mode, the driver is launched directly within the spark-submit process, which acts as a client to the cluster. The input and output of the application are attached to the console. Thus, this mode is especially suitable for applications that involve the REPL (e.g. Spark shell).

Alternatively, if your application is submitted from a machine far from the worker machines (e.g. locally on your laptop), it is common to use cluster mode to minimize network latency between the drivers and the executors. Currently, standalone mode does not support cluster mode for Python applications.

For Python applications, simply pass a .py file in the place of <application-jar> instead of a JAR,and add Python .zip, .egg or .py files to the search path with --py-files.
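
For instance, a Python job that depends on extra local modules could be submitted like this (the dependency file names deps.zip and helper.py are hypothetical; pi.py ships with the Spark examples):

# Run a Python application, shipping extra modules to the executors
./bin/spark-submit \
  --master spark://207.184.161.138:7077 \
  --py-files deps.zip,helper.py \
  examples/src/main/python/pi.py \
  1000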

There are a few options available that are specific to the cluster manager that is being used. For example, with a Spark standalone cluster in cluster deploy mode, you can also specify --supervise to make sure that the driver is automatically restarted if it fails with a non-zero exit code.【a high-availability configuration option】 To enumerate all such options available to spark-submit, run it with --help. Here are a few examples of common options:

# Run application locally on 8 cores
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master local[8] \
  /path/to/examples.jar \
  100

# Run on a Spark standalone cluster in client deploy mode
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://207.184.161.138:7077 \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000

# Run on a Spark standalone cluster in cluster deploy mode with supervise
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://207.184.161.138:7077 \
  --deploy-mode cluster \
  --supervise \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000

# Run on a YARN cluster
export HADOOP_CONF_DIR=XXX
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \  # can be client for client mode
  --executor-memory 20G \
  --num-executors 50 \
  /path/to/examples.jar \
  1000

# Run a Python application on a Spark standalone cluster
./bin/spark-submit \
  --master spark://207.184.161.138:7077 \
  examples/src/main/python/pi.py \
  1000

# Run on a Mesos cluster in cluster deploy mode with supervise
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master mesos://207.184.161.138:7077 \
  --deploy-mode cluster \
  --supervise \
  --executor-memory 20G \
  --total-executor-cores 100 \
  http://path/to/examples.jar \
  1000