This post records the path of one execution trace through the source code, in brief notes, so I can retrace the reasoning later and offer a thread to anyone else reading the source. Following along in the code itself should still be the most rewarding.
Entry point
Every job launched through spark-submit enters via SparkSubmit; start from the main class in SparkSubmit.scala.
main
==>
submit(appArgs)
==>
prepareSubmitEnvironment
Prepares the submission environment: determines which language the application uses (Java, Python, or R), whether the master is YARN or Mesos, and whether the deploy mode is client or cluster, and from that resolves the mainClass to launch. For yarn-cluster the main class is org.apache.spark.deploy.yarn.Client; for yarn-client it is whatever the user passed via --class.
doRunMain -->
runMain // resolves the main class and its main method, then starts execution via reflection with mainMethod.invoke
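For reference, a minimal sketch of the reflective launch runMain performs; childMainClass and childArgs below are hypothetical stand-ins for the values prepareSubmitEnvironment resolves, and the child classloader setup Spark does is omitted:

object ReflectiveLaunchSketch {
  def main(args: Array[String]): Unit = {
    val childMainClass = "org.apache.spark.deploy.yarn.Client" // e.g. yarn-cluster
    val childArgs = Array("--class", "org.example.MyApp")      // hypothetical
    val mainClass = Class.forName(childMainClass)
    // Look up the static main(Array[String]) entry point
    val mainMethod = mainClass.getMethod("main", classOf[Array[String]])
    // Receiver is null because main is static; the array is one varargs element
    mainMethod.invoke(null, childArgs)
  }
}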
yarn-cluster mode
Execution starts from the main class of org.apache.spark.deploy.yarn.Client:
main ==>
run ===>
submitApplication
val containerContext = createContainerLaunchContext(newAppResponse) // build the command that launches the ApplicationMaster
val appContext = createApplicationSubmissionContext(newApp, containerContext)
// Finally, submit and monitor the application
logInfo(s"Submitting application $appId to ResourceManager")
yarnClient.submitApplication(appContext) // hand the application to the ResourceManager via the YARN client
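As a standalone illustration of what this boils down to against the Hadoop YARN client API, here is a hedged sketch; the AM command, application name, and resource sizes are made-up placeholders, not what Spark actually sets:

import org.apache.hadoop.yarn.api.records.{ContainerLaunchContext, Resource}
import org.apache.hadoop.yarn.client.api.YarnClient
import org.apache.hadoop.yarn.conf.YarnConfiguration
import org.apache.hadoop.yarn.util.Records
import scala.collection.JavaConverters._

object YarnSubmitSketch {
  def main(args: Array[String]): Unit = {
    val yarnClient = YarnClient.createYarnClient()
    yarnClient.init(new YarnConfiguration())
    yarnClient.start()

    // Ask the ResourceManager for a new application id
    val app = yarnClient.createApplication()
    val appContext = app.getApplicationSubmissionContext

    // The launch context carries the command that starts the AM container
    val amContainer = Records.newRecord(classOf[ContainerLaunchContext])
    amContainer.setCommands(
      List("$JAVA_HOME/bin/java com.example.FakeAppMaster").asJava) // placeholder
    appContext.setApplicationName("submit-sketch")
    appContext.setAMContainerSpec(amContainer)
    appContext.setResource(Resource.newInstance(1024, 1)) // 1 GB, 1 vcore

    // Finally, hand the context to the ResourceManager
    val appId = yarnClient.submitApplication(appContext)
    println(s"Submitted $appId")
  }
}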
At this point the ApplicationMaster starts:
ApplicationMaster.scala main ===>
run() ===>
if (isClusterMode) { // distinguish cluster vs client mode
runDriver(securityMgr) // start the Driver
} else {
runExecutorLauncher(securityMgr)
}
==> runDriver
===> userClassThread = startUserApplication() // start the user-provided class (sketched below)
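startUserApplication boils down to running the user's main on a dedicated thread so the AM's own thread stays free for YARN allocation. A sketch; userClassName and userArgs are placeholders for what the AM parses from its launch arguments:

def startUserApplication(userClassName: String, userArgs: Array[String]): Thread = {
  val mainMethod =
    Class.forName(userClassName).getMethod("main", classOf[Array[String]])
  val userThread = new Thread {
    override def run(): Unit = mainMethod.invoke(null, userArgs)
  }
  userThread.setName("Driver") // Spark names this thread "Driver"
  userThread.start()
  userThread
}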
spark-submit then disconnects, and the submission is complete.
yarn-client mode
The entry point invokes the user's class directly.
During SparkContext initialization, the TaskScheduler implementation is created:
val (sched, ts) = SparkContext.createTaskScheduler(this, master, deployMode)
In yarn-client mode the backend is YarnClientSchedulerBackend
and the taskScheduler is YarnScheduler;
taskScheduler.start() in turn calls backend.start().
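The selection inside createTaskScheduler is essentially a pattern match on the master URL and deploy mode. A rough sketch, reduced to returning class names; the real method constructs the instances and covers many more masters (local[N], mesos, k8s, etc.):

def pickSchedulerAndBackend(master: String, deployMode: String): (String, String) =
  master match {
    case "yarn" if deployMode == "cluster" =>
      ("YarnClusterScheduler", "YarnClusterSchedulerBackend")
    case "yarn" => // client mode
      ("YarnScheduler", "YarnClientSchedulerBackend")
    case m if m.startsWith("spark://") =>
      ("TaskSchedulerImpl", "StandaloneSchedulerBackend")
    case m if m.startsWith("local") =>
      ("TaskSchedulerImpl", "LocalSchedulerBackend")
    case other =>
      throw new IllegalArgumentException(s"Unsupported master URL: $other")
  }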
In YarnClientSchedulerBackend's start method, a Client object is created and its submitApplication method is called:
client = new Client(args, conf)
bindToYarn(client.submitApplication(), None)
===> submitApplication
===> yarnClient.submitApplication
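After submitApplication, the client-mode backend waits for the application to come up by polling the ResourceManager (cluster mode's Client runs a similar monitoring loop). A sketch against the YARN client API; the 1s interval is a placeholder (Spark reads it from spark.yarn.report.interval):

import org.apache.hadoop.yarn.api.records.{ApplicationId, YarnApplicationState}
import org.apache.hadoop.yarn.client.api.YarnClient

def waitForApplication(yarnClient: YarnClient, appId: ApplicationId): Unit = {
  // Poll getApplicationReport until YARN reports RUNNING or a terminal state
  var state = yarnClient.getApplicationReport(appId).getYarnApplicationState
  while (state != YarnApplicationState.RUNNING &&
         state != YarnApplicationState.FINISHED &&
         state != YarnApplicationState.FAILED &&
         state != YarnApplicationState.KILLED) {
    Thread.sleep(1000)
    state = yarnClient.getApplicationReport(appId).getYarnApplicationState
  }
}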
In the ApplicationMaster's main class:
ApplicationMaster.scala main ===>
run() ===>
if (isClusterMode) { // distinguish cluster vs client mode
runDriver(securityMgr) // start the Driver
} else {
runExecutorLauncher(securityMgr)
}
Client mode takes the runExecutorLauncher branch:
the ApplicationMaster establishes an RPC connection to the client side and waits for the client to bring the driver up.
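waitForSparkDriver amounts to retrying a TCP connection to the driver's host:port until it succeeds. A self-contained sketch; host, port, and timeout are placeholders for values the AM gets from its arguments and config:

import java.net.{InetSocketAddress, Socket}

def waitForDriver(driverHost: String, driverPort: Int, timeoutMs: Long): Unit = {
  val deadline = System.currentTimeMillis() + timeoutMs
  var connected = false
  while (!connected && System.currentTimeMillis() < deadline) {
    try {
      val socket = new Socket()
      socket.connect(new InetSocketAddress(driverHost, driverPort), 100)
      socket.close()
      connected = true // driver is up; the real code then builds driverRef
    } catch {
      case _: java.io.IOException => Thread.sleep(100) // not up yet, retry
    }
  }
  if (!connected) sys.error("Spark driver did not come up in time")
}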
Comparison
//cluster
rpcEnv = sc.env.rpcEnv
val driverRef = runAMEndpoint(
sc.getConf.get("spark.driver.host"),
sc.getConf.get("spark.driver.port"),
isClusterMode = true) // driver host and port
registerAM(sc.getConf, rpcEnv, driverRef, sc.ui.map(_.appUIAddress).getOrElse(""),
securityMgr)
//client
rpcEnv = RpcEnv.create("sparkYarnAM", Utils.localHostName, port, sparkConf, securityMgr,
clientMode = true)
val driverRef = waitForSparkDriver() // driver runs on the client's local host
addAmIpFilter()
registerAM(sparkConf, rpcEnv, driverRef, sparkConf.get("spark.driver.appUIAddress", ""),
securityMgr) // register the AM
The difference is where the driver runs: in cluster mode it is co-located with the ApplicationMaster, while in client mode it is the local host of the submitting client.
main class
SparkPi.main
===>
rdd.reduce
===>
sc.runJob
===>
dagScheduler.runJob
===>
submitJob
===>
handleJobSubmitted
===>
submitStage
===>
submitMissingTasks // submitStage recurses over missing parent stages first (see the sketch at the end)
===>
taskScheduler.submitTasks(taskSet)
===>
backend.reviveOffers() // sends a ReviveOffers message to the DriverEndpoint inside CoarseGrainedSchedulerBackend
===>
On the scheduler backend (DriverEndpoint) side:
case ReviveOffers =>
makeOffers()
===>
launchTasks
launchTasks serializes each task and sends it to an executor (CoarseGrainedExecutorBackend) to run; at that point task submission is complete.
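To make the recursion at the submitStage/submitMissingTasks step concrete, here is a self-contained toy version; Stage is a stand-in for Spark's internal class, and the real DAGScheduler additionally tracks jobs, handles completion events, and aborts:

import scala.collection.mutable

// Toy model of DAGScheduler.submitStage: a stage's tasks are submitted only
// once all parent stages have been; otherwise it recurses into the parents
// and parks itself in waitingStages until they complete.
case class Stage(id: Int, parents: Seq[Stage])

object StageSubmitSketch {
  private val waitingStages = mutable.Set[Stage]()
  private val submitted = mutable.Set[Int]()

  def submitStage(stage: Stage): Unit = {
    val missing = stage.parents.filterNot(p => submitted(p.id)).sortBy(_.id)
    if (missing.isEmpty) {
      submitMissingTasks(stage)     // all parents available: ship this stage
    } else {
      missing.foreach(submitStage)  // recurse into missing parents first
      waitingStages += stage        // resubmitted when parents complete
    }
  }

  private def submitMissingTasks(stage: Stage): Unit = {
    submitted += stage.id
    println(s"submitting tasks for stage ${stage.id}")
  }

  def main(args: Array[String]): Unit = {
    val a = Stage(0, Nil)
    val b = Stage(1, Seq(a))
    submitStage(Stage(2, Seq(a, b))) // prints stage 0, stage 1; stage 2 waits
  }
}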