Spark Architecture and Principles

Architecture diagram: (image placeholder)

Notes:

The client uses spark-submit in standalone mode to submit a job to a machine in the Spark cluster. A Driver process (the DriverActor) is created via reflection, and the code we wrote runs inside this Driver. It first initializes the SparkContext, which calls the createTaskScheduler method; the relevant source is:
 // standalone mode: the master URL matches SPARK_REGEX (spark://host:port)
      case SPARK_REGEX(sparkUrl) =>
        val scheduler = new TaskSchedulerImpl(sc)
        val masterUrls = sparkUrl.split(",").map("spark://" + _)
        val backend = new StandaloneSchedulerBackend(scheduler, sc, masterUrls)
        scheduler.initialize(backend)
        (backend, scheduler)
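
For context, here is a minimal sketch of the application side that lands in this branch (the object name, app name, and master host are placeholders): any application whose master URL starts with spark://, whether set in code as below or passed to spark-submit via --master, will have its SparkContext create the TaskSchedulerImpl and StandaloneSchedulerBackend pair shown above.

import org.apache.spark.{SparkConf, SparkContext}

object StandaloneDemo {
  def main(args: Array[String]): Unit = {
    // Placeholder master URL; in practice it points at your standalone Master,
    // or is supplied via spark-submit --master spark://...
    val conf = new SparkConf()
      .setAppName("standalone-demo")
      .setMaster("spark://master-host:7077")

    // Constructing the SparkContext runs createTaskScheduler, which matches the
    // SPARK_REGEX branch above and wires the TaskSchedulerImpl to the backend
    val sc = new SparkContext(conf)

    // A trivial job just to exercise the pipeline
    println(sc.parallelize(1 to 100).sum())

    sc.stop()
  }
}
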
1. When the SparkContext is initialized, it creates the TaskSchedulerImpl and StandaloneSchedulerBackend objects and initializes the scheduler with the backend (scheduler.initialize(backend)).

2. The backend is then started, and the DAGScheduler and TaskScheduler objects are created; they handle stage construction and task scheduling respectively.

3. The TaskScheduler (through its backend) connects to the Master and registers the application. After receiving the registration, the Master uses its own scheduling algorithm to launch multiple executors for the app on the cluster's workers. Source:
Scheduling executors on workers:
 private def scheduleExecutorsOnWorkers(
      app: ApplicationInfo,
      usableWorkers: Array[WorkerInfo],
      spreadOutApps: Boolean): Array[Int] = {
    val coresPerExecutor = app.desc.coresPerExecutor
    val minCoresPerExecutor = coresPerExecutor.getOrElse(1)
    val oneExecutorPerWorker = coresPerExecutor.isEmpty
    val memoryPerExecutor = app.desc.memoryPerExecutorMB
    val numUsable = usableWorkers.length
    val assignedCores = new Array[Int](numUsable) // Number of cores to give to each worker
    val assignedExecutors = new Array[Int](numUsable) // Number of new executors on each worker
    var coresToAssign = math.min(app.coresLeft, usableWorkers.map(_.coresFree).sum)

    /** Return whether the specified worker can launch an executor for this app. */
    def canLaunchExecutor(pos: Int): Boolean = {
      val keepScheduling = coresToAssign >= minCoresPerExecutor
      val enoughCores = usableWorkers(pos).coresFree - assignedCores(pos) >= minCoresPerExecutor

      // If we allow multiple executors per worker, then we can always launch new executors.
      // Otherwise, if there is already an executor on this worker, just give it more cores.
      val launchingNewExecutor = !oneExecutorPerWorker || assignedExecutors(pos) == 0
      if (launchingNewExecutor) {
        val assignedMemory = assignedExecutors(pos) * memoryPerExecutor
        val enoughMemory = usableWorkers(pos).memoryFree - assignedMemory >= memoryPerExecutor
        val underLimit = assignedExecutors.sum + app.executors.size < app.executorLimit
        keepScheduling && enoughCores && enoughMemory && underLimit
      } else {
        // We're adding cores to an existing executor, so no need
        // to check memory and executor limits
        keepScheduling && enoughCores
      }
    }
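    // ... (remainder of the method omitted: it repeatedly grants minCoresPerExecutor
    // to workers that still satisfy canLaunchExecutor, moving to the next worker after
    // each grant when spreadOutApps is true, and finally returns assignedCores)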


Executors are launched from startExecutorsOnWorkers:
  private def startExecutorsOnWorkers(): Unit = {
    // Right now this is a very simple FIFO scheduler. We keep trying to fit in the first app
    // in the queue, then the second app, etc.

    for (app <- waitingApps if app.coresLeft > 0) {
      val coresPerExecutor: Option[Int] = app.desc.coresPerExecutor
      // Filter out workers that don't have enough resources to launch an executor
      val usableWorkers = workers.toArray.filter(_.state == WorkerState.ALIVE)
        .filter(worker => worker.memoryFree >= app.desc.memoryPerExecutorMB &&
          worker.coresFree >= coresPerExecutor.getOrElse(1))
        .sortBy(_.coresFree).reverse
      val assignedCores = scheduleExecutorsOnWorkers(app, usableWorkers, spreadOutApps)

      // Now that we've decided how many cores to allocate on each worker, let's allocate them
      for (pos <- 0 until usableWorkers.length if assignedCores(pos) > 0) {
        allocateWorkerResourceToExecutors(
          app, assignedCores(pos), coresPerExecutor, usableWorkers(pos))
      }
    }
  }

scheduleExecutorsOnWorkers assigns executors to workers and returns an array containing the number of cores to give to each worker. There are two modes of launching executors:

The first mode tries to spread an application's executors out across as many workers as possible, while the second does the opposite (it packs them onto as few workers as possible). The former is usually better for data locality and is the default.

The number of cores per executor is configurable. When it is set explicitly, several executors of the same application may be launched on the same worker if that worker has enough cores and memory. Otherwise each executor grabs all the cores available on its worker by default, in which case only one executor per application can be launched on each worker.

It is important to allocate cores a whole executor at a time rather than one core at a time. Consider the following example: the cluster has 4 workers with 16 cores each, and the user requests 3 executors (spark.cores.max = 48, spark.executor.cores = 16). If cores were allocated one at a time, 12 cores from each worker would end up assigned to the application; since 12 < 16, no executor would ever launch [SPARK-8881].
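
To make the two modes and the SPARK-8881 arithmetic concrete, here is a small self-contained sketch; it is a simplified imitation of the allocation loop, not the Master's actual code:

object AllocationSketch {
  // Returns the number of cores assigned to each worker.
  def assign(freeCores: Array[Int],
             maxCores: Int,
             coresPerExecutor: Int,
             spreadOut: Boolean): Array[Int] = {
    val assigned = Array.fill(freeCores.length)(0)
    var remaining = maxCores

    // A worker can take another executor if enough cores are left overall
    // and it still has a full executor's worth of free cores.
    def canLaunch(pos: Int): Boolean =
      remaining >= coresPerExecutor &&
        freeCores(pos) - assigned(pos) >= coresPerExecutor

    var candidates = freeCores.indices.filter(canLaunch)
    while (candidates.nonEmpty) {
      candidates.foreach { pos =>
        var keep = true
        while (keep && canLaunch(pos)) {
          // grant a whole executor's worth of cores at once (the SPARK-8881 fix)
          assigned(pos) += coresPerExecutor
          remaining -= coresPerExecutor
          if (spreadOut) keep = false // spread out: move on to the next worker
        }
      }
      candidates = candidates.filter(canLaunch)
    }
    assigned
  }

  def main(args: Array[String]): Unit = {
    val workers = Array(16, 16, 16, 16) // 4 workers, 16 free cores each

    // SPARK-8881 example: three 16-core executors fit because cores are
    // granted a whole executor at a time.
    println(assign(workers, maxCores = 48, coresPerExecutor = 16, spreadOut = true)
      .mkString(" ")) // 16 16 16 0

    // Spread-out vs. consolidate with smaller executors (2 cores each, 6 cores total):
    println(assign(workers, 6, 2, spreadOut = true).mkString(" "))  // 2 2 2 0
    println(assign(workers, 6, 2, spreadOut = false).mkString(" ")) // 6 0 0 0
  }
}
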


4. After a Worker receives the Master's scheduling decision, it launches one or more executors locally, and each executor creates a thread pool for running tasks (see the sketch below).
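
A rough sketch of the thread-pool idea (the object and method names here are illustrative, not the executor's real internals): every task handed to the executor is run on a pool thread rather than on the thread that received it.

import java.util.concurrent.Executors

object ExecutorPoolSketch {
  // A cached pool grows on demand and reuses idle threads, which roughly
  // matches how an executor runs many short-lived tasks concurrently.
  private val taskPool = Executors.newCachedThreadPool()

  def launchTask(taskId: Long, body: () => Unit): Unit = {
    taskPool.submit(new Runnable {
      override def run(): Unit = {
        // in real Spark this is where a TaskRunner would deserialize and run the task
        body()
        println(s"task $taskId finished on ${Thread.currentThread().getName}")
      }
    })
  }

  def main(args: Array[String]): Unit = {
    (1L to 4L).foreach(id => launchTask(id, () => Thread.sleep(100)))
    taskPool.shutdown()
  }
}
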


5. Once the executors are up, they register themselves back with the TaskScheduler (reverse registration).
6. When all executors have registered with the TaskScheduler, initialization is complete and our application code starts to run.
7. As the code executes, every action operation produces a job.
8. Each job is handed to the DAGScheduler, which splits it into multiple stages and creates a TaskSet for each stage.
9. Each TaskSet is submitted to the TaskScheduler, which sends its tasks to the executors.
10. When an executor receives a task, it starts a TaskRunner to execute it.
11. So a Spark program ultimately amounts to submitting stages, batch by batch, to the executors for execution (see the example after this list).
There are two kinds of tasks: ShuffleMapTasks and ResultTasks. The tasks of the final stage are ResultTasks; all earlier stages run ShuffleMapTasks.
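
To illustrate steps 7 through 11, a minimal two-stage job (the app name and master URL are placeholders): reduceByKey introduces a shuffle, so the single collect action produces one job with two stages, ShuffleMapTasks in the first stage and ResultTasks in the final stage.

import org.apache.spark.{SparkConf, SparkContext}

object TwoStageJob {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("two-stage-demo").setMaster("spark://master-host:7077"))

    val words = sc.parallelize(Seq("a", "b", "a", "c", "b", "a"))

    // Transformations only build the lineage; nothing runs yet.
    val counts = words.map(w => (w, 1)).reduceByKey(_ + _) // shuffle boundary -> new stage

    // The action triggers one job, which the DAGScheduler splits into:
    //   stage 0: ShuffleMapTasks doing the map side and the shuffle write
    //   stage 1: ResultTasks reading the shuffle output and producing the collected result
    counts.collect().foreach(println)

    sc.stop()
  }
}
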
