Using a Mesos Master URL
The Master URLs for Mesos are in the form mesos://host:5050 for a single-master Mesos cluster, or mesos://zk://host:2181 for a multi-master Mesos cluster using ZooKeeper.
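For illustration, either form can be passed straight to the shell; the host names below are placeholders for your own Mesos master or ZooKeeper node:
./bin/spark-shell --master mesos://mesos-master.example.com:5050
./bin/spark-shell --master mesos://zk://zk.example.com:2181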
The driver also needs some configuration in spark-env.sh to interact properly with Mesos:
- In spark-env.sh set some environment variables (a sample spark-env.sh sketch follows this list):
  - export MESOS_NATIVE_LIBRARY=<path to libmesos.so>. This path is typically <prefix>/lib/libmesos.so, where the prefix is /usr/local by default. See the Mesos installation instructions above. On Mac OS X, the library is called libmesos.dylib instead of libmesos.so.
  - export SPARK_EXECUTOR_URI=<URL of spark-1.0.1.tar.gz uploaded above>.
- Also set spark.executor.uri to <URL of spark-1.0.1.tar.gz>.
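For example, a minimal conf/spark-env.sh might look like the following sketch; the library path and the HDFS location of the Spark tarball are placeholders, so substitute the values from your own installation:
# conf/spark-env.sh -- placeholder values, adjust for your installation
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
export SPARK_EXECUTOR_URI=hdfs://namenode:9000/tmp/spark-1.0.1.tar.gz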
Now when starting a Spark application against the cluster, pass a mesos:// URL as the master when creating a SparkContext. For example:
val conf = new SparkConf()
.setMaster("mesos://HOST:5050")
.setAppName("My app")
.set("spark.executor.uri", "<path to spark-1.0.1.tar.gz uploaded above>")
val sc = new SparkContext(conf)
(You can also use spark-submit and configure spark.executor.uri in the conf/spark-defaults.conf file. Note that spark-submit currently only supports deploying the Spark driver in client mode for Mesos.)
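As a sketch of that spark-submit route (the tarball URL, application class, and jar name here are hypothetical):
# conf/spark-defaults.conf
spark.executor.uri   hdfs://namenode:9000/tmp/spark-1.0.1.tar.gz

# then submit the driver in client mode against the Mesos master
./bin/spark-submit --master mesos://host:5050 --class com.example.MyApp my-app.jar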
When running a shell, the spark.executor.uri parameter is inherited from SPARK_EXECUTOR_URI, so it does not need to be redundantly passed in as a system property.
./bin/spark-shell --master mesos://host:5050
Mesos Run Modes
Spark can run over Mesos in two modes: “fine-grained” (default) and “coarse-grained”.
In “fine-grained” mode (default), each Spark task runs as a separate Mesos task. This allows multiple instances of Spark (and other frameworks) to share machines at a very fine granularity, where each application gets more or fewer machines as it ramps up and down, but it comes with an additional overhead in launching each task. This mode may be inappropriate for low-latency requirements like interactive queries or serving web requests.
The “coarse-grained” mode will instead launch only one long-running Spark task on each Mesos machine, and dynamically schedule its own “mini-tasks” within it. The benefit is much lower startup overhead, but at the cost of reserving the Mesos resources for the complete duration of the application.
To run in coarse-grained mode, set the spark.mesos.coarse property in your SparkConf:
conf.set("spark.mesos.coarse", "true")
In addition, for coarse-grained mode, you can control the maximum number of resources Spark will acquire. By default, it will acquire all cores in the cluster (that get offered by Mesos), which only makes sense if you run just one application at a time. You can cap the maximum number of cores using conf.set("spark.cores.max", "10") (for example).
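Putting the coarse-grained settings together, a sketch of a SparkConf that caps the application at 10 cores (the host name is a placeholder) might look like:
val conf = new SparkConf()
  .setMaster("mesos://HOST:5050")
  .setAppName("My app")
  .set("spark.executor.uri", "<URL of spark-1.0.1.tar.gz>")
  .set("spark.mesos.coarse", "true") // one long-running Mesos task per machine
  .set("spark.cores.max", "10")      // cap on cores acquired across the cluster
val sc = new SparkContext(conf)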
Start Spark:
./bin/spark-shell --master mesos://127.0.1.1:5050
# Test
scala> val file = sc.textFile("hdfs://hadoop-master:9000/tmp/WifiScan_None_20140723.csv")
scala> val count=file.flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey(_+_)
scala> count.count()
1854
(Reference: http://spark.apache.org/docs/latest/running-on-mesos.html)
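As an optional follow-up in the same shell session, you can also peek at a few of the (word, count) pairs themselves; the output naturally depends on your data:
scala> count.take(5).foreach(println)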
