Spark on YARN Overview
MapReduce: process-based
each task runs in its own JVM process: a MapTask or ReduceTask process
when a task completes, its process goes away
Spark: thread-based
many tasks can run concurrently as threads within a single process (the executor)
this process sticks around for the lifetime of the Spark application, even when no jobs are running
advantages:
speed: tasks can start up very quickly (no process launch per task)
data can be cached in memory and shared across tasks
Cluster Manager
a Spark application talks to a cluster manager (CM)
Local, Standalone, YARN, Mesos, K8S ==> the cluster manager is pluggable
ApplicationMaster (AM)
every YARN application starts with an AM, which runs in the first container
Worker:
YARN has no such concept; instead, each executor runs inside a container (container memory > executor memory, since YARN adds overhead on top of the executor heap)
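To make the container-vs-executor sizing concrete, here is a small shell sketch of the sizing rule, assuming Spark's default memory overhead of max(384 MB, 10% of executor memory) (the `spark.executor.memoryOverhead` default):

```shell
# Sketch: compute the YARN container size for a given executor heap.
# Assumes Spark's default overhead rule: max(384 MB, 10% of executor memory).
EXECUTOR_MEM_MB=1024                      # e.g. --executor-memory 1G
OVERHEAD_MB=$((EXECUTOR_MEM_MB / 10))     # 10% of the executor heap
if [ "$OVERHEAD_MB" -lt 384 ]; then
  OVERHEAD_MB=384                         # floor of 384 MB
fi
CONTAINER_MEM_MB=$((EXECUTOR_MEM_MB + OVERHEAD_MB))
echo "container: ${CONTAINER_MEM_MB} MB"  # executor heap + overhead
```

So a 1 GB executor actually occupies a 1408 MB container, which is why the container must always be larger than the executor memory you request.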
The client submits the job (with the driver included in it): it first contacts the RM, which launches the AM; the AM then requests resources and launches containers to run executors.
Spark itself is merely a client here.
YARN (Resource Manager) client mode: the driver runs in the client process, so the AM's only job is to request resources.
spark-submit --master yarn
or, equivalently (client is the default deploy mode):
spark-submit --master yarn --deploy-mode client
YARN (Resource Manager) cluster mode: the driver runs inside the AM; once the driver has finished initializing in the AM, the client can go away. The AM not only requests resources but also handles task scheduling (normally the driver's job, but since the driver runs inside the AM, the two roles are combined).
spark-submit --master yarn --deploy-mode cluster
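A fuller cluster-mode submission might look like the sketch below; the example class ships with Spark, but the jar path and version depend on your installation:

```shell
# Cluster mode: the driver runs inside the AM on the cluster.
# The jar path/version is illustrative; adjust to your Spark installation.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.12-3.0.0.jar \
  100
```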
The official docs say that running the driver close to the worker nodes gives slightly better performance,
but then how do you view the logs?
Viewing logs in cluster mode:
yarn logs -applicationId <applicationId>
note: spark-shell cannot be started in cluster deploy mode; with --master yarn an interactive shell runs in client mode only (the driver must stay on the client)
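For an interactive shell on YARN, the deploy mode is therefore fixed:

```shell
# spark-shell on YARN always runs the driver locally (client mode);
# --deploy-mode cluster is rejected for interactive shells.
spark-shell --master yarn --deploy-mode client
```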
options:
--driver-memory MEM    default 1024M
--executor-memory MEM  default 1G
cluster deploy mode only:
--driver-cores NUM     default 1
YARN only:
--executor-cores NUM   default 1
--num-executors NUM    default 2
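The options above combine in a single submission, sketched below; the class name and jar are hypothetical placeholders:

```shell
# Sketch combining the options above; com.example.MyApp and myapp.jar
# are hypothetical, and the values are illustrative.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 2g \
  --driver-cores 2 \
  --executor-memory 4g \
  --executor-cores 2 \
  --num-executors 4 \
  --class com.example.MyApp \
  myapp.jar
```

With these settings YARN runs 4 executor containers of (4 GB + overhead) each, plus one AM container holding the 2 GB driver.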
Some Spark-on-YARN configuration parameters:
spark.yarn.am.memory
default: 512M
in yarn-client mode, the amount of memory to request for the YARN Application Master.
spark.driver.memory
default: 512M
in yarn-cluster mode, the amount of memory to request for the YARN Application Master (which includes the driver).
spark.yarn.am.cores
default: 1
in yarn-client mode, the number of CPU cores to request for the YARN Application Master.
spark.driver.cores
default: 1
in yarn-cluster mode, the number of CPU cores to request for the YARN Application Master (which includes the driver).
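These properties can be passed with --conf at submit time or placed in conf/spark-defaults.conf; a sketch (values and application names are illustrative):

```shell
# Setting the AM sizing properties on the command line (client mode,
# so spark.yarn.am.* applies); com.example.MyApp and myapp.jar are hypothetical.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --conf spark.yarn.am.memory=1g \
  --conf spark.yarn.am.cores=2 \
  --class com.example.MyApp \
  myapp.jar

# or equivalently in conf/spark-defaults.conf:
#   spark.yarn.am.memory  1g
#   spark.yarn.am.cores   2
```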