Spark Source Code Reading 02 - Scheduling Algorithms in the Spark Core

// Two scheduling modes are supported: FIFO and FAIR (fair scheduling)

schedulableBuilder.addTaskSetManager(manager, manager.taskSet.properties)

}

}

3. Providing the sorted task set managers

When TaskSchedulerImpl.resourceOffers allocates resources, it obtains the already-sorted TaskSetManagers from the root scheduling pool rootPool. The ordering is supplied by the comparator methods of the two scheduling strategies, FIFOSchedulingAlgorithm and FairSchedulingAlgorithm. The code is as follows:

def resourceOffers(offers: IndexedSeq[WorkerOffer]): Seq[Seq[TaskDescription]] = synchronized {

// Get the TaskSetManagers sorted according to the configured scheduling policy

val sortedTaskSets = rootPool.getSortedTaskSetQueue

}

(1) The FIFO scheduling strategy is implemented as follows:

private[spark] class FIFOSchedulingAlgorithm extends SchedulingAlgorithm {

override def comparator(s1: Schedulable, s2: Schedulable): Boolean = {

// Get the job priority, which is in fact the job ID

val priority1 = s1.priority

val priority2 = s2.priority

var res = math.signum(priority1 - priority2)

// If both belong to the same job, compare by stage ID

if (res == 0) {

val stageId1 = s1.stageId

val stageId2 = s2.stageId

res = math.signum(stageId1 - stageId2)

}

res < 0

}

}

(2) The FAIR scheduling strategy is implemented as follows:

private[spark] class FairSchedulingAlgorithm extends SchedulingAlgorithm {

// Compares the scheduling priority of two Schedulables; returns true if s1 should be scheduled before s2, false otherwise

override def comparator(s1: Schedulable, s2: Schedulable): Boolean = {

// Minimum share, i.e. the minimum number of tasks guaranteed to each Schedulable

val minShare1 = s1.minShare

val minShare2 = s2.minShare

// Number of currently running tasks

val runningTasks1 = s1.runningTasks

val runningTasks2 = s2.runningTasks

// Degree of starvation: a Schedulable is "needy" when its running tasks are fewer than its minimum share

val s1Needy = runningTasks1 < minShare1

val s2Needy = runningTasks2 < minShare2

// Min-share ratio: running tasks / minimum share

val minShareRatio1 = runningTasks1.toDouble / math.max(minShare1, 1.0)

val minShareRatio2 = runningTasks2.toDouble / math.max(minShare2, 1.0)

// Weight ratio: running tasks / weight of the Schedulable

val taskToWeightRatio1 = runningTasks1.toDouble / s1.weight.toDouble

val taskToWeightRatio2 = runningTasks2.toDouble / s2.weight.toDouble

var compare = 0

// Decide the ordering

if (s1Needy && !s2Needy) {

return true

} else if (!s1Needy && s2Needy) {

return false

} else if (s1Needy && s2Needy) {

compare = minShareRatio1.compareTo(minShareRatio2)

} else {

compare = taskToWeightRatio1.compareTo(taskToWeightRatio2)

}

if (compare < 0) {

true

} else if (compare > 0) {

false

} else {

s1.name < s2.name

}

}

}
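Both comparators are consumed when the sorted queue is built: Pool.getSortedTaskSetQueue sorts its direct children with the configured algorithm and then recurses into child pools. The following is slightly simplified from Pool.scala in Spark 2.x:

override def getSortedTaskSetQueue: ArrayBuffer[TaskSetManager] = {
val sortedTaskSetQueue = new ArrayBuffer[TaskSetManager]
// Sort the direct children (task sets or sub-pools) with the FIFO or FAIR comparator
val sortedSchedulableQueue =
schedulableQueue.asScala.toSeq.sortWith(taskSetSchedulingAlgorithm.comparator)
// Recursively collect the TaskSetManagers of each child in sorted order
for (schedulable <- sortedSchedulableQueue) {
sortedTaskSetQueue ++= schedulable.getSortedTaskSetQueue
}
sortedTaskSetQueue
}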

Scheduling among tasks


Before introducing the task scheduling algorithm, we first introduce two concepts: data locality and delay scheduling.

1. Data locality

Computation should be performed as close to the data as possible, i.e. on the node where the data resides, which reduces the amount of data moved across the network and thus the cost of moving it. If the data is already in the memory of the node running the task, disk I/O can be avoided as well. In Spark the data-locality levels, from highest to lowest priority, are PROCESS_LOCAL, NODE_LOCAL, NO_PREF, RACK_LOCAL and ANY (see the TaskLocality sketch after this list): ideally the data is already in the memory of the process running the task, next best is the same node (machine), then the same rack, and finally any location. The data locality of a task is determined as follows:

  • If the task belongs to a stage at the beginning of the job, the RDD partitions it processes have preferred locations, which also become the task's preferred locations; the data locality is NODE_LOCAL.

  • If the task belongs to a stage that is not at the beginning of the job, its preferred locations are derived from where the parent stage ran. In that case, if the executor holding the parent stage's output is still alive, the data locality is PROCESS_LOCAL; if that executor is no longer alive but the parent stage's output still exists on the node, the data locality is NODE_LOCAL.

  • If there is no preferred location, the data locality is NO_PREF.
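These levels correspond to Spark's TaskLocality enumeration, in which the declaration order defines the priority (a smaller value means better locality). A minimal sketch of its definition:

object TaskLocality extends Enumeration {
// Declaration order defines priority: PROCESS_LOCAL is the best, ANY the worst
val PROCESS_LOCAL, NODE_LOCAL, NO_PREF, RACK_LOCAL, ANY = Value
type TaskLocality = Value
// A task preferring `condition` may run under scheduling constraint `constraint`
// only if the constraint is no stricter than the preference
def isAllowed(constraint: TaskLocality, condition: TaskLocality): Boolean = {
condition <= constraint
}
}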

2. Delay scheduling

When a task is assigned to a node, the scheduler first checks whether the task's best node is free. If that node does not have enough resources to run the task, the scheduler waits for a while; if the node frees enough resources within the wait time, the task runs there, otherwise the next-best node is chosen. Scheduling this way lets tasks run on nodes with a higher data-locality level, which reduces disk I/O and network transfer.

  • The principle of Spark task assignment is therefore to run each task on a node with the highest possible data-locality level, even at the cost of waiting for a while; the per-level waits are configurable, as illustrated after this list.
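How long the scheduler is willing to wait at each locality level is controlled by the spark.locality.wait family of settings. A minimal example of setting them explicitly (the values simply restate the defaults and are illustrative, not recommendations):

import org.apache.spark.SparkConf

val conf = new SparkConf()
// Overall wait before downgrading one locality level (default 3s)
.set("spark.locality.wait", "3s")
// Per-level overrides for PROCESS_LOCAL, NODE_LOCAL and RACK_LOCAL respectively
.set("spark.locality.wait.process", "3s")
.set("spark.locality.wait.node", "3s")
.set("spark.locality.wait.rack", "3s")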

3. Task scheduling algorithm

TaskSetManager is the central object in task assignment. When it is initialized, addPendingTask uses each task's preferred locations to build four lists: pendingTasksForExecutor, pendingTasksForHost, pendingTasksForRack and pendingTasksWithNoPrefs. From these four lists, computeValidLocalityLevels derives the locality levels valid for the task set, and available worker offers are then matched against these levels from highest to lowest. Before matching, getAllowedLocalityLevel returns the locality level the task set is currently allowed to use, and the stricter (higher-priority) of this level and the requested level is taken. Finally, for the given worker, the scheduler checks whether a pending task exists at the resulting locality level; if so, the task is returned together with its locality, and the related bookkeeping is updated. The code is as follows:

private[spark] def addPendingTask(index: Int) {

for (loc <- tasks(index).preferredLocations) {

loc match {

case e: ExecutorCacheTaskLocation =>

pendingTasksForExecutor.getOrElseUpdate(e.executorId, new ArrayBuffer) += index

case e: HDFSCacheTaskLocation =>

val exe = sched.getExecutorsAliveOnHost(loc.host)

exe match {

case Some(set) =>

for (e <- set) {

pendingTasksForExecutor.getOrElseUpdate(e, new ArrayBuffer) += index

}

logInfo(s"Pending task $index has a cached location at ${e.host} " +

“, where there are executors " + set.mkString(”,"))

case None => logDebug(s"Pending task $index has a cached location at ${e.host} " +

“, but there are no executors alive there.”)

}

case _ =>

}

pendingTasksForHost.getOrElseUpdate(loc.host, new ArrayBuffer) += index

for (rack <- sched.getRackForHost(loc.host)) {

pendingTasksForRack.getOrElseUpdate(rack, new ArrayBuffer) += index

}

}

if (tasks(index).preferredLocations == Nil) {

pendingTasksWithNoPrefs += index

}

allPendingTasks += index // No point scanning this whole list to find the old task there

}

private def computeValidLocalityLevels(): Array[TaskLocality.TaskLocality] = {

import TaskLocality.{PROCESS_LOCAL, NODE_LOCAL, NO_PREF, RACK_LOCAL, ANY}

val levels = new ArrayBuffer[TaskLocality.TaskLocality]

if (!pendingTasksForExecutor.isEmpty &&

pendingTasksForExecutor.keySet.exists(sched.isExecutorAlive(_))) {

levels += PROCESS_LOCAL

}

if (!pendingTasksForHost.isEmpty &&

pendingTasksForHost.keySet.exists(sched.hasExecutorsAliveOnHost(_))) {

levels += NODE_LOCAL

}

if (!pendingTasksWithNoPrefs.isEmpty) {

levels += NO_PREF

}

if (!pendingTasksForRack.isEmpty &&

pendingTasksForRack.keySet.exists(sched.hasHostAliveOnRack(_))) {

levels += RACK_LOCAL

}

levels += ANY

logDebug("Valid locality levels for " + taskSet + “: " + levels.mkString(”, "))

levels.toArray

}

The TaskSchedulerImpl.resourceOffers method is shown below:

def resourceOffers(offers: IndexedSeq[WorkerOffer]): Seq[Seq[TaskDescription]] = synchronized {

// filteredOffers is produced earlier in the method (omitted in this excerpt) by
// filtering out offers from blacklisted hosts and executors
// Shuffle the offers randomly so that tasks are not always concentrated on the same workers
val shuffledOffers = shuffleOffers(filteredOffers)

// Build a list of tasks to assign to each worker.

// Stores the tasks assigned to each worker offer

val tasks = shuffledOffers.map(o => new ArrayBuffer[TaskDescription](o.cores / CPUS_PER_TASK))

val availableCpus = shuffledOffers.map(o => o.cores).toArray

val availableSlots = shuffledOffers.map(o => o.cores / CPUS_PER_TASK).sum

// Get the TaskSetManagers sorted according to the configured scheduling policy

val sortedTaskSets = rootPool.getSortedTaskSetQueue

// If a new executor has joined, each task set must recompute its locality levels
// (newExecAvail is set earlier in the method, omitted in this excerpt)
for (taskSet <- sortedTaskSets) {
logDebug("parentName: %s, name: %s, runningTasks: %s".format(
taskSet.parent.name, taskSet.name, taskSet.runningTasks))
if (newExecAvail) {
taskSet.executorAdded()
}
}

// Offer resources to each sorted TaskSetManager, preferring the closest data,
// i.e. in the order PROCESS_LOCAL, NODE_LOCAL, NO_PREF, RACK_LOCAL, ANY
for (taskSet <- sortedTaskSets) {
// A barrier task set is skipped in this round if the available slots cannot hold all of its tasks
if (taskSet.isBarrier && availableSlots < taskSet.numTasks) {
logInfo(s"Skip current round of resource offers for barrier stage ${taskSet.stageId} " +
s"because the barrier taskSet requires ${taskSet.numTasks} slots, while the total " +
s"number of available slots is $availableSlots.")
} else {

var launchedAnyTask = false

// Record all the executor IDs assigned barrier tasks on.

val addressesWithDescs = ArrayBuffer[(String, TaskDescription)]()

for (currentMaxLocality <- taskSet.myLocalityLevels) {

var launchedTaskAtCurrentMaxLocality = false

do {

launchedTaskAtCurrentMaxLocality = resourceOfferSingleTaskSet(taskSet,

currentMaxLocality, shuffledOffers, availableCpus, tasks, addressesWithDescs)

launchedAnyTask |= launchedTaskAtCurrentMaxLocality

} while (launchedTaskAtCurrentMaxLocality)

}

if (!launchedAnyTask) {

taskSet.getCompletelyBlacklistedTaskIfAny(hostToExecutors).foreach { taskIndex =>

executorIdToRunningTaskIds.find(x => !isExecutorBusy(x._1)) match {

case Some ((executorId, _)) =>

if (!unschedulableTaskSetToExpiryTime.contains(taskSet)) {

blacklistTrackerOpt.foreach(blt => blt.killBlacklistedIdleExecutor(executorId))

val timeout = conf.get(config.UNSCHEDULABLE_TASKSET_TIMEOUT) * 1000

unschedulableTaskSetToExpiryTime(taskSet) = clock.getTimeMillis() + timeout

logInfo(s"Waiting for $timeout ms for completely "

  • s"blacklisted task to be schedulable again before aborting $taskSet.")

abortTimer.schedule(

createUnschedulableTaskSetAbortTimer(taskSet, taskIndex), timeout)

}

case None => // Abort Immediately

logInfo("Cannot schedule any task because of complete blacklisting. No idle" +

s" executors can be found to kill. Aborting $taskSet." )

taskSet.abortSinceCompletelyBlacklisted(taskIndex)

}

}

} else {
// A task was launched in this round, so any pending abort timers for previously
// unschedulable task sets are no longer needed
if (unschedulableTaskSetToExpiryTime.nonEmpty) {
unschedulableTaskSetToExpiryTime.clear()
}
}

if (launchedAnyTask && taskSet.isBarrier) {
// For a barrier stage all tasks are launched together; their executor addresses are
// written into the task properties here (details omitted in this excerpt)
logInfo(s"Successfully scheduled all the ${addressesWithDescs.size} tasks for barrier " +
s"stage ${taskSet.stageId}.")

}

}

}

// TODO SPARK-24823 Cancel a job that contains barrier stage(s) if the barrier tasks don't get

// launched within a configured time.

if (tasks.size > 0) {

hasLaunchedTask = true

}

return tasks

}

Task scheduling for a single task set is implemented by TaskSchedulerImpl.resourceOfferSingleTaskSet. The code is as follows:

private def resourceOfferSingleTaskSet(

taskSet: TaskSetManager,

maxLocality: TaskLocality,

shuffledOffers: Seq[WorkerOffer],

availableCpus: Array[Int],

tasks: IndexedSeq[ArrayBuffer[TaskDescription]],

addressesWithDescs: ArrayBuffer[(String, TaskDescription)]) : Boolean = {

// Iterate over all worker offers and try to assign tasks to each of them

var launchedTask = false

for (i <- 0 until shuffledOffers.size) {

val execId = shuffledOffers(i).executorId

val host = shuffledOffers(i).host

// Only consider this worker if it still has enough free CPU cores to run one task

if (availableCpus(i) >= CPUS_PER_TASK) {

try {

// Ask the task set for a task to run on this executor; after assignment, update the bookkeeping maps and decrement the available CPUs

for (task <- taskSet.resourceOffer(execId, host, maxLocality)) {

tasks(i) += task

val tid = task.taskId

taskIdToTaskSetManager.put(tid, taskSet)

taskIdToExecutorId(tid) = execId

executorIdToRunningTaskIds(execId).add(tid)

availableCpus(i) -= CPUS_PER_TASK

assert(availableCpus(i) >= 0)

if (taskSet.isBarrier) {

addressesWithDescs += (shuffledOffers(i).address.get -> task)

}

launchedTask = true

}

} catch {

case e: TaskNotSerializableException =>

logError(s"Resource offer failed, task set ${taskSet.name} was not serializable")

return launchedTask

}

}

}

return launchedTask

}

Assigning a task to the executor of a given worker is implemented by TaskSetManager.resourceOffer. The code is as follows:

def resourceOffer(

execId: String,

host: String,

maxLocality: TaskLocality.TaskLocality)
: Option[TaskDescription] =

{

val offerBlacklisted = taskSetBlacklistHelperOpt.exists { blacklist =>

blacklist.isNodeBlacklistedForTaskSet(host) ||

blacklist.isExecutorBlacklistedForTaskSet(execId)

}

if (!isZombie && !offerBlacklisted) {

val curTime = clock.getTimeMillis()

var allowedLocality = maxLocality

// If the offer carries a locality preference

if (maxLocality != TaskLocality.NO_PREF) {

// Get the locality level this task set is currently allowed to use; the result of getAllowedLocalityLevel changes over time

allowedLocality = getAllowedLocalityLevel(curTime)

// If the allowed locality level is looser than maxLocality, constrain it to maxLocality

if (allowedLocality > maxLocality) {

// We're not allowed to search for farther-away tasks

allowedLocality = maxLocality

}

}

dequeueTask(execId, host, allowedLocality).map { case ((index, taskLocality, speculative)) =>

// Update the bookkeeping that defines task, taskId, the attempt number, etc. (omitted in this excerpt), then serialize the task

val serializedTask: ByteBuffer = try {

ser.serialize(task)

} catch {
// A task that cannot be serialized will always fail, so abort the whole task set
case NonFatal(e) =>
val msg = s"Failed to serialize task $taskId, not attempting to retry it."
logError(msg, e)
abort(s"$msg Exception during serialization: $e")
throw new TaskNotSerializableException(e)
}

if (serializedTask.limit() > TaskSetManager.TASK_SIZE_TO_WARN_KB * 1024 &&

!emittedTaskSizeWarning) {

emittedTaskSizeWarning = true

logWarning(s"Stage ${task.stageId} contains a task of very large size " +

s"(${serializedTask.limit() / 1024} KB). The maximum recommended task size is " +

s"${TaskSetManager.TASK_SIZE_TO_WARN_KB} KB.")

}

// Add this task to the set of running tasks
addRunningTask(taskId)
// ... finally construct and return the TaskDescription for the launched task (omitted in this excerpt)

}

} else {

None

}

}

Finally, TaskSetManager.getAllowedLocalityLevel determines the data-locality level the task set is currently allowed to use. Starting from the level obtained last time, the method walks the locality levels from highest to lowest priority and checks whether any task still needs to run at each level: if no pending task exists at a level, it moves to the next level immediately, without waiting; if pending tasks exist but the time already waited exceeds the delay configured for that level, it also moves to the next level; otherwise it returns the current level. The code is as follows:

private def getAllowedLocalityLevel(curTime: Long): TaskLocality.TaskLocality = {

// Check, against the running copies (copiesRunning) and the finished tasks (successful), whether a pending list still contains a task that needs to be scheduled
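// The remainder of the method, condensed from the Spark 2.x TaskSetManager source
// (the lazy cleanup of empty executor/host/rack keys is simplified away):
def tasksNeedToBeScheduledFrom(pendingTaskIds: ArrayBuffer[Int]): Boolean = {
var indexOffset = pendingTaskIds.size
while (indexOffset > 0) {
indexOffset -= 1
val index = pendingTaskIds(indexOffset)
if (copiesRunning(index) == 0 && !successful(index)) {
return true
} else {
// Lazily drop tasks that are already running or finished
pendingTaskIds.remove(indexOffset)
}
}
false
}

def moreTasksToRunIn(pendingTasks: HashMap[String, ArrayBuffer[Int]]): Boolean =
pendingTasks.values.exists(tasksNeedToBeScheduledFrom)

// Walk the locality levels starting from the one used last time
while (currentLocalityIndex < myLocalityLevels.length - 1) {
val moreTasks = myLocalityLevels(currentLocalityIndex) match {
case TaskLocality.PROCESS_LOCAL => moreTasksToRunIn(pendingTasksForExecutor)
case TaskLocality.NODE_LOCAL => moreTasksToRunIn(pendingTasksForHost)
case TaskLocality.NO_PREF => pendingTasksWithNoPrefs.nonEmpty
case TaskLocality.RACK_LOCAL => moreTasksToRunIn(pendingTasksForRack)
}
if (!moreTasks) {
// No pending task at this level: move on immediately instead of waiting
lastLaunchTime = curTime
currentLocalityIndex += 1
} else if (curTime - lastLaunchTime >= localityWaits(currentLocalityIndex)) {
// The configured wait for this level has expired: fall back to the next level
lastLaunchTime += localityWaits(currentLocalityIndex)
currentLocalityIndex += 1
} else {
// Pending tasks exist and the wait has not expired: use this level
return myLocalityLevels(currentLocalityIndex)
}
}
myLocalityLevels(currentLocalityIndex)
}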
