Overview
This article can be read out of order: whenever a method puzzles you, jump to its section and read the code and the commentary side by side.
Constants
companion object {
    // A symbol to mark workers that are not in parkedWorkersStack
    @JvmField
    val NOT_IN_STACK = Symbol("NOT_IN_STACK")

    // Worker ctl states
    private const val PARKED = -1    // the worker is on the parkedWorkersStack
    private const val CLAIMED = 0    // initial state
    private const val TERMINATED = 1 // terminal state

    // Masks of control state
    /**
     * |------------------------------------|---------------------|----------------------|
     * | cpu permit (22 bits)               | BLOCKING (21 bits)  | CREATED (21 bits)    |
     * |------------------------------------|---------------------|----------------------|
     */
    private const val BLOCKING_SHIFT = 21 // 2M threads max
    private const val CREATED_MASK: Long = (1L shl BLOCKING_SHIFT) - 1
    private const val BLOCKING_MASK: Long = CREATED_MASK shl BLOCKING_SHIFT
    private const val CPU_PERMITS_SHIFT = BLOCKING_SHIFT * 2
    private const val CPU_PERMITS_MASK = CREATED_MASK shl CPU_PERMITS_SHIFT
    internal const val MIN_SUPPORTED_POOL_SIZE = 1 // we support 1 for test purposes, but it is not usually used
    internal const val MAX_SUPPORTED_POOL_SIZE = (1 shl BLOCKING_SHIFT) - 2

    // Masks of parkedWorkersStack
    /**
     * |------------------------------------|------------------------|
     * | PARKED_VERSION (43 bits)           | PARKED_INDEX (21 bits) |
     * |------------------------------------|------------------------|
     */
    private const val PARKED_INDEX_MASK = CREATED_MASK
    private const val PARKED_VERSION_MASK = CREATED_MASK.inv()
    private const val PARKED_VERSION_INC = 1L shl BLOCKING_SHIFT
}
In theory the pool supports up to about 2M threads (MAX_SUPPORTED_POOL_SIZE = 2^21 - 2).
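To make the layout concrete, here is a small self-contained sketch (illustration only, not scheduler code) that packs and unpacks the three controlState fields with the same shifts and masks:
// Standalone demo of the controlState bit layout (illustration, not kotlinx.coroutines code)
fun main() {
    val blockingShift = 21
    val createdMask = (1L shl blockingShift) - 1
    val blockingMask = createdMask shl blockingShift
    val cpuPermitsShift = blockingShift * 2
    val cpuPermitsMask = createdMask shl cpuPermitsShift
    // pack: 8 CPU permits, 3 blocking tasks, 5 created workers
    val state = (8L shl cpuPermitsShift) or (3L shl blockingShift) or 5L
    // unpack each field the same way the scheduler's inline helpers do
    println(state and createdMask)                          // 5 created
    println((state and blockingMask) shr blockingShift)     // 3 blocking
    println((state and cpuPermitsMask) shr cpuPermitsShift) // 8 permits
}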
CoroutineScheduler
CoroutineScheduler implements Java's Executor interface and is the scheduler behind Dispatchers.Default. Its scheduling logic: IO tasks run on IO-oriented threads and CPU-bound tasks run on CPU-oriented threads, but the distinction is completely transparent to the tasks themselves; from a task's point of view, a thread is just a thread. The pool's threads are named DefaultDispatcher-worker-1, DefaultDispatcher-worker-2, DefaultDispatcher-worker-3, and so on.
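A quick way to see those names for yourself (a minimal sketch; the exact indices and ordering vary from run to run):
import kotlinx.coroutines.*

fun main() = runBlocking {
    repeat(4) {
        launch(Dispatchers.Default) {
            // prints names like "DefaultDispatcher-worker-1"
            println(Thread.currentThread().name)
        }
    }
}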
public actual object Dispatchers {
    @JvmStatic
    public actual val Default: CoroutineDispatcher = DefaultScheduler
}
internal val DEFAULT_SCHEDULER_NAME = systemProp(
    "kotlinx.coroutines.scheduler.default.name", "DefaultDispatcher"
)
internal object DefaultScheduler : SchedulerCoroutineDispatcher(
    CORE_POOL_SIZE, MAX_POOL_SIZE,
    // DEFAULT_SCHEDULER_NAME defaults to "DefaultDispatcher"
    IDLE_WORKER_KEEP_ALIVE_NS, DEFAULT_SCHEDULER_NAME
) {
    ......
}
internal open class SchedulerCoroutineDispatcher(
    private val corePoolSize: Int = CORE_POOL_SIZE,
    private val maxPoolSize: Int = MAX_POOL_SIZE,
    private val idleWorkerKeepAliveNs: Long = IDLE_WORKER_KEEP_ALIVE_NS,
    // schedulerName defaults to "DefaultDispatcher" when created via DefaultScheduler
    private val schedulerName: String = "CoroutineScheduler",
) : ExecutorCoroutineDispatcher() {
    override val executor: Executor
        get() = coroutineScheduler

    // This is variable for test purposes, so that we can reinitialize from clean state
    private var coroutineScheduler = createScheduler()

    private fun createScheduler() =
        // CoroutineScheduler is the actual scheduler
        CoroutineScheduler(corePoolSize, maxPoolSize, idleWorkerKeepAliveNs, schedulerName)

    override fun dispatch(context: CoroutineContext, block: Runnable): Unit = coroutineScheduler.dispatch(block)

    override fun dispatchYield(context: CoroutineContext, block: Runnable): Unit =
        coroutineScheduler.dispatch(block, tailDispatch = true)

    internal fun dispatchWithContext(block: Runnable, context: TaskContext, tailDispatch: Boolean) {
        coroutineScheduler.dispatch(block, context, tailDispatch)
    }
    ....
}
internal class CoroutineScheduler(
    // Number of core threads; core threads are kept alive and run CPU-bound work.
    // A blocking (IO) task must not hold on to a CPU permit (the worker releases it voluntarily).
    @JvmField val corePoolSize: Int,
    // Maximum number of threads
    @JvmField val maxPoolSize: Int,
    // How long an idle thread may live; any thread beyond corePoolSize that runs out of work is idle
    @JvmField val idleWorkerKeepAliveNs: Long = IDLE_WORKER_KEEP_ALIVE_NS,
    // Name of the scheduler
    @JvmField val schedulerName: String = DEFAULT_SCHEDULER_NAME
) : Executor, Closeable {
    // Global queue of CPU-bound tasks
    @JvmField
    val globalCpuQueue = GlobalQueue()
    // Global queue of blocking (IO) tasks
    @JvmField
    val globalBlockingQueue = GlobalQueue()

    // Every parked thread is on this stack; this value encodes the top of the stack in two parts:
    // PARKED_INDEX is an index into the workers array through which the worker can be found,
    // and PARKED_VERSION is a version counter recording how many times the top has been changed.
    /**
     * |------------------------------------|------------------------|
     * | PARKED_VERSION (43 bits)           | PARKED_INDEX (21 bits) |
     * |------------------------------------|------------------------|
     */
    private val parkedWorkersStack = atomic(0L)

    // Array holding the worker threads; workers[0] is unused and stays null, valid workers start at index 1
    @JvmField
    val workers = ResizableAtomicArray<Worker>((corePoolSize + 1) * 2)

    /**
     * |------------------------------------|---------------------|----------------------|
     * | cpu permit (22 bits)               | BLOCKING (21 bits)  | CREATED (21 bits)    |
     * |------------------------------------|---------------------|----------------------|
     */
    // That is, the cpu permit field starts out at corePoolSize
    private val controlState = atomic(corePoolSize.toLong() shl CPU_PERMITS_SHIFT)

    // Number of threads created so far
    private val createdWorkers: Int inline get() = (controlState.value and CREATED_MASK).toInt()
    // Number of available CPU permits, initially corePoolSize
    private val availableCpuPermits: Int inline get() = availableCpuPermits(controlState.value)

    // Extracts the created-thread count from a state snapshot
    private inline fun createdWorkers(state: Long): Int = (state and CREATED_MASK).toInt()
    // Extracts the blocking-task count from a state snapshot
    private inline fun blockingTasks(state: Long): Int = (state and BLOCKING_MASK shr BLOCKING_SHIFT).toInt()
    inline fun availableCpuPermits(state: Long): Int = (state and CPU_PERMITS_MASK shr CPU_PERMITS_SHIFT).toInt()
    .....
}
First, let's understand what parkedWorkersStack looks like. It describes a stack: top is the top of the stack, and each worker points at the one below it via nextParkedWorker. The worker at the bottom of the stack has nextParkedWorker pointing at index 0 of the workers array, i.e. at null.
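The index-plus-version encoding is the classic defense a Treiber stack uses against the ABA problem: every CAS on the top also bumps a version counter, so a stale snapshot of the top can never be re-installed. A toy sketch of just that encoding (illustration only, not the scheduler's code):
import java.util.concurrent.atomic.AtomicLong

class VersionedTop {
    private val top = AtomicLong(0L)
    private val indexMask = (1L shl 21) - 1 // low 21 bits: index
    private val versionInc = 1L shl 21      // remaining bits: version

    // Install newIndex as the new top; the version is bumped on every attempt,
    // so a thread holding a stale snapshot of 'top' can never CAS successfully.
    fun replaceTop(newIndex: Int) {
        while (true) {
            val cur = top.get()
            val updVersion = (cur + versionInc) and indexMask.inv()
            if (top.compareAndSet(cur, updVersion or newIndex.toLong())) return
        }
    }

    fun topIndex(): Int = (top.get() and indexMask).toInt()
}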
parkedWorkersStackPush
The purpose of this method is to point the current worker's nextParkedWorker at the previous PARKED worker and make the current worker the new top of the stack. Note that it does not change the worker's state.
fun parkedWorkersStackPush(worker: Worker): Boolean {
    // nextParkedWorker is initially NOT_IN_STACK
    if (worker.nextParkedWorker !== NOT_IN_STACK) return false // already in stack, bail out
    /*
     * The below loop can be entered only if this worker was not in the stack and, since no other thread
     * can add it to the stack (only the worker itself), this invariant holds while this loop executes.
     */
    // top is the current top of the stack
    parkedWorkersStack.loop { top ->
        // the very first time around, top is 0
        val index = (top and PARKED_INDEX_MASK).toInt()
        val updVersion = (top + PARKED_VERSION_INC) and PARKED_VERSION_MASK
        // a Worker's indexInArray starts at 1
        val updIndex = worker.indexInArray
        assert { updIndex != 0 } // only this worker can push itself, cannot be terminated
        // nextParkedWorker points at the next parked worker; on the first push index is 0, so it points at null
        worker.nextParkedWorker = workers[index]
        /*
         * Other thread can be changing this worker's index at this point, but it
         * also invokes parkedWorkersStackTopUpdate which updates version to make next CAS fail.
         * Successful CAS of the stack top completes successful push.
         */
        // publish the new top; the next worker to push will point its nextParkedWorker at this one
        if (parkedWorkersStack.compareAndSet(top, updVersion or updIndex.toLong())) return true
    }
}
parkedWorkersStackPop
Pops a worker off the top of the stack.
private fun parkedWorkersStackPop(): Worker? {
    parkedWorkersStack.loop { top ->
        // index into the workers array
        val index = (top and PARKED_INDEX_MASK).toInt()
        val worker = workers[index] ?: return null // stack is empty
        val updVersion = (top + PARKED_VERSION_INC) and PARKED_VERSION_MASK
        // index of the next parked worker
        val updIndex = parkedWorkersStackNextIndex(worker)
        // updIndex < 0 means another thread changed things underneath us; retry the loop
        if (updIndex < 0) return@loop // retry
        /*
         * Other thread can be changing this worker's index at this point, but it
         * also invokes parkedWorkersStackTopUpdate which updates version to make next CAS fail.
         * Successful CAS of the stack top completes successful pop.
         */
        // move the top of the stack to the next worker
        if (parkedWorkersStack.compareAndSet(top, updVersion or updIndex.toLong())) {
            /*
             * We've just took worker out of the stack, but nextParkerWorker is not reset yet, so if a worker is
             * currently invoking parkedWorkersStackPush it would think it is in the stack and bail out without
             * adding itself again. It does not matter, since we are going it invoke unpark on the thread
             * that was popped out of parkedWorkersStack anyway.
             */
            // CAS succeeded; reset the worker's nextParkedWorker back to NOT_IN_STACK
            worker.nextParkedWorker = NOT_IN_STACK
            return worker
        }
    }
}

// Finds the array index of the next parked worker below the given one
private fun parkedWorkersStackNextIndex(worker: Worker): Int {
    var next = worker.nextParkedWorker
    findNext@ while (true) {
        when {
            next === NOT_IN_STACK -> return -1 // we are too late -- other thread popped this element, retry
            // the stack has become empty
            next === null -> return 0 // stack becomes empty
            else -> {
                val nextWorker = next as Worker
                // nextWorker's position in the array
                val updIndex = nextWorker.indexInArray
                // a Worker's index starts at 1
                if (updIndex != 0) return updIndex // found good index for next worker
                // Otherwise, this worker was terminated and we cannot put it to top anymore, check next
                // nextWorker may already be terminated; move on to the one below it
                next = nextWorker.nextParkedWorker
            }
        }
    }
}
parkedWorkersStackTopUpdate
This is called when a worker's indexInArray changes (for example during termination re-indexing, see tryTerminateWorker below). It bumps the version so that any concurrent push/pop CAS fails, and repoints the top if it referred to oldIndex.
fun parkedWorkersStackTopUpdate(worker: Worker, oldIndex: Int, newIndex: Int) {
    parkedWorkersStack.loop { top ->
        val index = (top and PARKED_INDEX_MASK).toInt()
        val updVersion = (top + PARKED_VERSION_INC) and PARKED_VERSION_MASK
        val updIndex = if (index == oldIndex) {
            if (newIndex == 0) {
                parkedWorkersStackNextIndex(worker)
            } else {
                newIndex
            }
        } else {
            index // no change to index, but update version
        }
        if (updIndex < 0) return@loop // retry
        if (parkedWorkersStack.compareAndSet(top, updVersion or updIndex.toLong())) return
    }
}
Worker
As you can see, a Worker is simply a Thread. A Worker in the WorkerState.TERMINATED state no longer accepts or processes tasks; all that awaits it is death.
internal inner class Worker private constructor() : Thread() {
    @Volatile // volatile for push/pop operation into parkedWorkersStack
    var indexInArray = 0
        set(index) {
            // schedulerName defaults to "DefaultDispatcher", so threads are named
            // DefaultDispatcher-worker-1, DefaultDispatcher-worker-2, DefaultDispatcher-worker-3, etc.
            name = "$schedulerName-worker-${if (index == 0) "TERMINATED" else index.toString()}"
            field = index
        }

    // index starts at 1
    constructor(index: Int) : this() {
        indexInArray = index
    }

    // scheduler points at the enclosing CoroutineScheduler
    inline val scheduler get() = this@CoroutineScheduler

    // Private local queue; every Worker has one. It can hold both IO and CPU tasks.
    @JvmField
    val localQueue: WorkQueue = WorkQueue()

    // When a Worker runs out of its own work it can steal tasks from other Workers'
    // localQueues; a stolen task is placed into stolenTask
    private val stolenTask: ObjectRef<Task?> = ObjectRef()

    // DORMANT is the initial state
    @JvmField
    var state = WorkerState.DORMANT

    val workerCtl = atomic(CLAIMED)
    private var terminationDeadline = 0L

    // Points at the next Worker in the PARKED stack
    @Volatile
    var nextParkedWorker: Any? = NOT_IN_STACK

    // If a steal attempt fails, the worker pauses for a while;
    // minDelayUntilStealableTaskNs is how long it pauses
    private var minDelayUntilStealableTaskNs = 0L

    // Whether localQueue may still contain tasks
    @JvmField
    var mayHaveLocalTasks = false
    .....
}
// Worker states
enum class WorkerState {
    // holds a CPU permit
    CPU_ACQUIRED,
    // executing a blocking (IO) task
    BLOCKING,
    // all tasks processed; the thread is parked
    PARKING,
    // initial state
    DORMANT,
    // terminal state; the worker will no longer be used
    TERMINATED
}
BLOCKING means the Worker is executing an IO-bound (blocking) coroutine. By that point it has already released its CPU permit (there are corePoolSize CPU permits in total), and it wakes another thread so the freed CPU capacity keeps being used for other tasks.
PARKING means the thread has finished all of its tasks but there is no need to destroy it yet, so it parks for the time being.
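From the outside you can observe the permit-releasing behavior through Dispatchers.IO, which dispatches blocking tasks into this same scheduler (a hedged demo; the exact count depends on your machine and library version):
import kotlinx.coroutines.*
import java.util.concurrent.ConcurrentHashMap

fun main() = runBlocking {
    val names = ConcurrentHashMap.newKeySet<String>()
    List(64) {
        launch(Dispatchers.IO) {
            names += Thread.currentThread().name
            Thread.sleep(100) // simulate a blocking call
        }
    }.joinAll()
    // blocking tasks release CPU permits, so the pool can grow past corePoolSize
    println("distinct worker threads: ${names.size}")
}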
submitToLocalQueue
Submits a task to the Worker's local queue. If tailDispatch is true, enqueueing is fair: the task joins the back of the queue. If false, the task may jump the queue.
WorkQueue's lastScheduledTask holds the most recently inserted task; when a Worker takes work from its local queue it checks lastScheduledTask first, and that is how queue-jumping works. A sketch of the fairness effect follows the code below.
If submitToLocalQueue returns non-null, submission to the local queue failed, possibly because there is no Worker at all (the current thread is not a worker).
private fun Worker?.submitToLocalQueue(task: Task, tailDispatch: Boolean): Task? {
    // the current thread is not a Worker; submission fails
    if (this == null) return task
    /*
     * This worker could have been already terminated from this thread by close/shutdown and it should not
     * accept any more tasks into its local queue.
     */
    // a terminated Worker no longer accepts tasks
    if (state === WorkerState.TERMINATED) return task
    // Do not add CPU tasks in local queue if we are not able to execute it
    if (!task.isBlocking && state === WorkerState.BLOCKING) {
        // the Worker is BLOCKING and the task is not: don't put it into this thread's local queue;
        // let it go to a global queue so another thread can handle it
        return task
    }
    // note that the local queue now has tasks
    mayHaveLocalTasks = true
    return localQueue.add(task, fair = tailDispatch)
}
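From user code, tailDispatch = true is reachable via yield(), which goes through dispatchYield (shown in SchedulerCoroutineDispatcher above). A small demo of the fairness effect; limitedParallelism(1) is used only to make the interleaving deterministic (it may require an experimental-API opt-in on older library versions):
import kotlinx.coroutines.*

fun main() = runBlocking {
    val oneLane = Dispatchers.Default.limitedParallelism(1)
    val a = launch(oneLane) { repeat(3) { println("A$it"); yield() } }
    val b = launch(oneLane) { repeat(3) { println("B$it"); yield() } }
    joinAll(a, b)
    // prints A0 B0 A1 B1 A2 B2: each yield() re-enqueues the coroutine fairly, at the tail
}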
Permit
private inline fun tryAcquireCpuPermit(): Boolean = controlState.loop { state ->
    // how many CPU permits are left; there are corePoolSize of them to begin with
    val available = availableCpuPermits(state)
    // none left: return false
    if (available == 0) return false
    // some left: take one
    val update = state - (1L shl CPU_PERMITS_SHIFT)
    // CAS the new state; on failure, loop again
    if (controlState.compareAndSet(state, update)) return true
}
private fun tryAcquireCpuPermit(): Boolean = when {
    // already holds a permit: return true
    state == WorkerState.CPU_ACQUIRED -> true
    // try to acquire a permit
    this@CoroutineScheduler.tryAcquireCpuPermit() -> {
        // got one: update the Worker's state
        state = WorkerState.CPU_ACQUIRED
        true
    }
    // could not acquire a permit
    else -> false
}
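The same CAS loop works as a standalone permit counter. A minimal sketch with AtomicLong (illustration only, not the scheduler's code):
import java.util.concurrent.atomic.AtomicLong

class CasPermits(initial: Int) {
    private val permits = AtomicLong(initial.toLong())

    fun tryAcquire(): Boolean {
        while (true) {
            val cur = permits.get()
            if (cur == 0L) return false                          // none left
            if (permits.compareAndSet(cur, cur - 1)) return true // on CAS failure: retry
        }
    }

    fun release() { permits.incrementAndGet() }
}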
findTask
fun findTask(mayHaveLocalTasks: Boolean): Task? {
    // acquire a CPU permit first, then look for any kind of task
    if (tryAcquireCpuPermit()) return findAnyTask(mayHaveLocalTasks)
    /*
     * If we can't acquire a CPU permit, attempt to find blocking task:
     * - Check if our queue has one (maybe mixed in with CPU tasks)
     * - Poll global and try steal
     */
    // No permit means the core threads are all busy, so this thread picks up an IO task instead.
    // A thread without a permit may only run IO tasks: IO tasks block, and you would not want
    // a core thread to keep alternating between running and blocking.
    return findBlockingTask()
}

// NB: ONLY for runSingleTask method
private fun findBlockingTask(): Task? {
    // pop from the local queue first; if it is empty, check the global blocking queue;
    // if that is empty too, try stealing another Worker's local tasks
    return localQueue.pollBlocking()
        ?: globalBlockingQueue.removeFirstOrNull()
        ?: trySteal(STEAL_BLOCKING_ONLY)
}
private fun findAnyTask(scanLocalQueue: Boolean): Task? {
    // scanLocalQueue says whether to scan the local queue at all
    if (scanLocalQueue) {
        // nextInt is based on the Marsaglia xorshift RNG (see the sketch after this function);
        // globalFirst decides whether to poll the global queues first,
        // which happens with probability 1 / (2 * corePoolSize), so local work is usually preferred
        val globalFirst = nextInt(2 * corePoolSize) == 0
        if (globalFirst) pollGlobalQueues()?.let { return it }
        // otherwise take from the local queue
        localQueue.poll()?.let { return it }
        // the local queue turned out to be empty; fall back to the global queues
        if (!globalFirst) pollGlobalQueues()?.let { return it }
    } else {
        // not scanning the local queue: go straight to the global queues
        pollGlobalQueues()?.let { return it }
    }
    // nothing local, nothing global: steal from another Worker
    return trySteal(STEAL_ANY)
}
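For reference, nextInt is a Marsaglia xorshift generator kept per worker so that workers don't contend on a shared Random. A textbook xorshift32 sketch (the scheduler's own version differs in details such as seeding):
// xorshift32 sketch (illustration only; the state must never be 0)
var rngState: Int = java.util.Random().nextInt().let { if (it == 0) 42 else it }

fun nextInt(upperBound: Int): Int {
    var r = rngState
    r = r xor (r shl 13)
    r = r xor (r ushr 17)
    r = r xor (r shl 5)
    rngState = r
    val mask = upperBound - 1
    if (mask and upperBound == 0) return r and mask // fast path for power-of-two bounds
    return (r and Int.MAX_VALUE) % upperBound       // general case
}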
private fun trySteal(stealingMode: StealingMode): Task? {
    // number of threads created so far
    val created = createdWorkers
    // 0 to await an initialization and 1 to avoid excess stealing on single-core machines
    if (created < 2) {
        // with a single worker there is nobody to steal from
        return null
    }

    // random starting index
    var currentIndex = nextInt(created)
    var minDelay = Long.MAX_VALUE
    // walk over every created worker once, skipping ourselves, and try to steal from each
    repeat(created) {
        ++currentIndex
        if (currentIndex > created) currentIndex = 1
        val worker = workers[currentIndex]
        if (worker !== null && worker !== this) {
            // try to steal from the other Worker's local queue
            val stealResult = worker.localQueue.trySteal(stealingMode, stolenTask)
            if (stealResult == TASK_STOLEN) {
                // stolen successfully; the task is in stolenTask.element
                val result = stolenTask.element
                stolenTask.element = null
                return result
            } else if (stealResult > 0) {
                // nothing stolen; stealResult > 0 is how long this thread could park
                // before the target task becomes stealable, so keep the minimum
                minDelay = min(minDelay, stealResult)
            }
        }
    }
    // when to try stealing next: effectively set an alarm and try again once it fires
    minDelayUntilStealableTaskNs = if (minDelay != Long.MAX_VALUE) minDelay else 0
    return null
}
runWorker
A Worker is a thread; when the thread starts, its run method executes runWorker.
override fun run() = runWorker()

private fun runWorker() {
    var rescanned = false
    // keep processing tasks until the Worker is terminated
    while (!isTerminated && state != WorkerState.TERMINATED) {
        // look for a task
        val task = findTask(mayHaveLocalTasks)
        // Task found. Execute and repeat
        if (task != null) {
            // found one: no need to rescan
            rescanned = false
            // and no need to sleep either
            minDelayUntilStealableTaskNs = 0L
            // run the task
            executeTask(task)
            continue
        } else {
            // found nothing at all, so the local queue must be empty
            mayHaveLocalTasks = false
        }
        if (minDelayUntilStealableTaskNs != 0L) {
            if (!rescanned) {
                rescanned = true
            } else {
                rescanned = false
                // release the permit; the Worker's state becomes PARKING
                tryReleaseCpu(WorkerState.PARKING)
                // clear the interrupt flag
                interrupted()
                // park for a while
                LockSupport.parkNanos(minDelayUntilStealableTaskNs)
                minDelayUntilStealableTaskNs = 0L
            }
            continue
        }
        // No work for a long time. If the worker does not wake within idleWorkerKeepAliveNs,
        // threads beyond the guaranteed core pool size are headed for termination.
        tryPark()
    }
    // release the permit and move the Worker to TERMINATED
    tryReleaseCpu(WorkerState.TERMINATED)
}
private fun tryPark() {
    if (!inStack()) {
        // no work for a while: first push ourselves onto the parked stack
        parkedWorkersStackPush(this)
        return
    }
    workerCtl.value = PARKED // Update value once
    /*
     * inStack() prevents spurious wakeups, while workerCtl.value == PARKED
     * prevents the following race:
     *
     * - T2 scans the queue, adds itself to the stack, goes to rescan
     * - T2 suspends in 'workerCtl.value = PARKED' line
     * - T1 pops T2 from the stack, claims workerCtl, suspends
     * - T2 fails 'while (inStack())' check, goes to full rescan
     * - T2 adds itself to the stack, parks
     * - T1 unparks T2, bails out with success
     * - T2 unparks and loops in 'while (inStack())'
     */
    while (inStack() && workerCtl.value == PARKED) { // Prevent spurious wakeups
        if (isTerminated || state == WorkerState.TERMINATED) break
        // we are on the parked stack now; update the worker's state
        tryReleaseCpu(WorkerState.PARKING)
        interrupted() // Cleanup interruptions
        // if not woken within idleWorkerKeepAliveNs, the worker may move to the terminal state and die
        park()
    }
}

private fun inStack(): Boolean = nextParkedWorker !== NOT_IN_STACK
park
private fun park() {
    // set termination deadline the first time we are here (it is reset in idleReset)
    if (terminationDeadline == 0L) terminationDeadline = System.nanoTime() + idleWorkerKeepAliveNs
    // actually park
    LockSupport.parkNanos(idleWorkerKeepAliveNs)
    // try terminate when we are idle past termination deadline
    // note that comparison is written like this to protect against potential nanoTime wraparound
    if (System.nanoTime() - terminationDeadline >= 0) {
        terminationDeadline = 0L // if attempt to terminate worker fails we'd extend deadline again
        tryTerminateWorker()
    }
}
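The subtract-then-compare in the deadline check is deliberate: System.nanoTime() may wrap around Long.MAX_VALUE, and a signed difference stays correct under overflow while a direct comparison does not. A tiny illustration:
fun main() {
    val deadline = Long.MAX_VALUE - 5 // a deadline just before the wrap point
    val now = Long.MIN_VALUE + 10     // "now", shortly after nanoTime wrapped
    println(now >= deadline)          // false: direct comparison is fooled by the wrap
    println(now - deadline >= 0)      // true: the difference overflows back to +16
}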
tryTerminateWorker
/**
 * Stops execution of current thread and removes it from [createdWorkers].
 */
private fun tryTerminateWorker() {
    synchronized(workers) {
        // Make sure we're not trying race with termination of scheduler
        if (isTerminated) return
        // Someone else terminated, bail out
        // keep the core threads alive
        if (createdWorkers <= corePoolSize) return
        /*
         * See tryUnpark for state reasoning.
         * If this CAS fails, then we were successfully unparked by other worker and cannot terminate.
         */
        if (!workerCtl.compareAndSet(PARKED, TERMINATED)) return
        /*
         * At this point this thread is no longer considered as usable for scheduling.
         * We need multi-step choreography to reindex workers.
         *
         * 1) Read current worker's index and reset it to zero.
         */
        // entering the terminal state
        val oldIndex = indexInArray
        indexInArray = 0
        /*
         * Now this worker cannot become the top of parkedWorkersStack, but it can
         * still be at the stack top via oldIndex.
         *
         * 2) Update top of stack if it was pointing to oldIndex and make sure no
         * pending push/pop operation that might have already retrieved oldIndex could complete.
         */
        parkedWorkersStackTopUpdate(this, oldIndex, 0)
        /*
         * 3) Move last worker into an index in array that was previously occupied by this worker,
         * if last worker was a different one (sic!).
         */
        // decrement the created-workers count
        val lastIndex = decrementCreatedWorkers()
        if (lastIndex != oldIndex) {
            val lastWorker = workers[lastIndex]!!
            workers.setSynchronized(oldIndex, lastWorker)
            lastWorker.indexInArray = oldIndex
            /*
             * Now lastWorker is available at both indices in the array, but it can
             * still be at the stack top on via its lastIndex
             *
             * 4) Update top of stack lastIndex -> oldIndex and make sure no
             * pending push/pop operation that might have already retrieved lastIndex could complete.
             */
            parkedWorkersStackTopUpdate(lastWorker, lastIndex, oldIndex)
        }
        /*
         * 5) It is safe to clear reference from workers array now.
         */
        // clear the worker's slot in the array
        workers.setSynchronized(lastIndex, null)
    }
    state = WorkerState.TERMINATED
}
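Steps 3 to 5 are the classic swap-with-last trick for O(1) removal from an array; the scheduler merely wraps it with the stack-top version fixups. In isolation (toy sketch, not the scheduler's code):
fun <T> swapRemoveAt(arr: Array<T?>, oldIndex: Int, lastIndex: Int) {
    if (lastIndex != oldIndex) {
        arr[oldIndex] = arr[lastIndex] // move the last element into the hole
    }
    arr[lastIndex] = null              // clear the vacated slot
}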
executeTask
private fun executeTask(task: Task) {
    terminationDeadline = 0L // reset deadline for termination
    // If the state is still PARKING when we get here, what does that tell us?
    // A worker that acquired a CPU permit is in WorkerState.CPU_ACQUIRED,
    // and a worker without a permit may only run IO tasks (this mirrors findTask).
    // So a PARKING worker here must be about to run an IO task: set the state to BLOCKING.
    if (state == WorkerState.PARKING) {
        assert { task.isBlocking }
        state = WorkerState.BLOCKING
    }
    // IO tasks release the CPU permit
    if (task.isBlocking) {
        // Always notify about new work when releasing CPU-permit to execute some blocking task
        // For an IO task, try to release the permit. Why? A thread holding a permit should keep
        // the CPU busy rather than block. Releasing here moves the worker to BLOCKING and wakes
        // another thread so the CPU keeps being used.
        if (tryReleaseCpu(WorkerState.BLOCKING)) {
            // wake another thread, or create a new one, to run the remaining work
            signalCpuWork()
        }
        // the worker gives up its permit voluntarily, a rather selfless act
        // run the IO task
        runSafely(task)
        decrementBlockingTasks()
        val currentState = state
        // Shutdown sequence of blocking dispatcher
        if (currentState !== WorkerState.TERMINATED) {
            assert { currentState == WorkerState.BLOCKING } // "Expected BLOCKING state, but has $currentState"
            state = WorkerState.DORMANT
        }
    } else {
        runSafely(task)
    }
}
dispatch
With the background above, let's look at CoroutineScheduler's dispatch method. CoroutineScheduler implements Java's Executor interface, so executing a task through the Executor API actually runs dispatch:
override fun execute(command: Runnable) = dispatch(command)
The Runnable is first wrapped into a Task whose submissionTime records when it was submitted to a queue. Later, when another thread tries to steal this task, it computes the difference between the time of the steal attempt and submissionTime; if the difference is too small, stealing is not allowed. After all, if a task were stolen the instant it was queued, it might as well have been put on the thief Worker's queue in the first place.
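A hedged sketch of that freshness rule (the real check lives in WorkQueue.tryStealLastScheduled; the constant name and return convention here are hypothetical, not the real API):
// Illustration of the "too fresh to steal" rule; names are hypothetical.
const val STEAL_RESOLUTION_NS = 100_000L // assumed threshold; the real one comes from a system property

// Returns 0 if the task is old enough to steal, otherwise the nanoseconds the
// thief should wait, which trySteal aggregates into minDelayUntilStealableTaskNs.
fun stealableIn(submissionTime: Long, now: Long): Long {
    val age = now - submissionTime
    return if (age >= STEAL_RESOLUTION_NS) 0 else STEAL_RESOLUTION_NS - age
}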
fun createTask(block: Runnable, taskContext: TaskContext): Task {
    // current time
    val nanoTime = schedulerTimeSource.nanoTime()
    if (block is Task) {
        // refresh the submission time
        block.submissionTime = nanoTime
        // taskContext records what kind of task this is
        block.taskContext = taskContext
        return block
    }
    return block.asTask(nanoTime, taskContext)
}
tailDispatch defaults to false; for its meaning, see the submitToLocalQueue discussion above.
fun dispatch(block: Runnable, taskContext: TaskContext = NonBlockingContext, tailDispatch: Boolean = false) {
    trackTask() // this is needed for virtual time support
    // wrap into a Task: a task carries its type (IO or CPU) and its submission time
    // (the latter for the benefit of would-be thieves)
    val task = createTask(block, taskContext)
    // is this a blocking (IO) task?
    val isBlockingTask = task.isBlocking
    // Invariant: we increment counter **before** publishing the task
    // so executing thread can safely decrement the number of blocking tasks
    // if the task is blocking, bump the blocking-task count inside controlState
    val stateSnapshot = if (isBlockingTask) incrementBlockingTasks() else 0
    // try to submit the task to the local queue and act depending on the result
    // the current worker thread, if the caller is one
    val currentWorker = currentWorker()
    // submit to the Worker's local queue; returns null on success and the task itself on failure
    val notAdded = currentWorker.submitToLocalQueue(task, tailDispatch)
    if (notAdded != null) {
        // local submission failed: add to a global queue instead
        if (!addToGlobalQueue(notAdded)) {
            // Global queue is closed in the last step of close/shutdown -- no more tasks should be accepted
            throw RejectedExecutionException("$schedulerName was terminated")
        }
    }
    // tailDispatch defaults to false. On the very first dispatch currentWorker is null;
    // if it is non-null a worker thread already exists and no unpark is needed,
    // because that thread will wake up on its own.
    val skipUnpark = tailDispatch && currentWorker != null
    // Checking 'task' instead of 'notAdded' is completely okay
    if (isBlockingTask) {
        // Use state snapshot to better estimate the number of running threads
        signalBlockingWork(stateSnapshot, skipUnpark = skipUnpark)
    } else {
        if (skipUnpark) return
        // treat this as the very first dispatch, so currentWorker == null
        signalCpuWork()
    }
}
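End to end, dispatch can be exercised through the dispatcher's Executor face: asExecutor() on Dispatchers.Default returns the underlying CoroutineScheduler (see the executor getter above), so execute below lands in the dispatch method we just read. A minimal sketch (worker threads are daemons, hence the crude sleep):
import kotlinx.coroutines.*

fun main() {
    Dispatchers.Default.asExecutor().execute {
        println("ran on ${Thread.currentThread().name}")
    }
    Thread.sleep(200) // crude wait so the daemon worker gets to run the task
}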
signalCpuWork
fun signalCpuWork() {
    if (tryUnpark()) return
    // no PARKED worker to wake; try to create a new one
    if (tryCreateWorker()) return
    // creation failed, perhaps because there are already too many threads; try waking someone again
    tryUnpark()
}

private fun tryUnpark(): Boolean {
    while (true) {
        // pop a worker off the parked stack
        val worker = parkedWorkersStackPop() ?: return false
        if (worker.workerCtl.compareAndSet(PARKED, CLAIMED)) {
            // wake the thread
            LockSupport.unpark(worker)
            return true
        }
    }
}
tryCreateWorker
private fun tryCreateWorker(state: Long = controlState.value): Boolean {
    val created = createdWorkers(state)
    val blocking = blockingTasks(state)
    val cpuWorkers = (created - blocking).coerceAtLeast(0)
    /*
     * We check how many threads are there to handle non-blocking work,
     * and create one more if we have not enough of them.
     */
    if (cpuWorkers < corePoolSize) {
        val newCpuWorkers = createNewWorker()
        // If we've created the first cpu worker and corePoolSize > 1 then create
        // one more (second) cpu worker, so that stealing between them is operational
        // only one thread is working and more are allowed: create a second one so they can steal from each other
        if (newCpuWorkers == 1 && corePoolSize > 1) createNewWorker()
        // worker created successfully
        if (newCpuWorkers > 0) return true
    }
    return false
}
private fun createNewWorker(): Int {
    val worker: Worker
    // synchronized block
    return synchronized(workers) {
        // Make sure we're not trying to resurrect terminated scheduler
        if (isTerminated) return -1
        val state = controlState.value
        // number of threads created so far
        val created = createdWorkers(state)
        // number of blocking tasks
        val blocking = blockingTasks(state)
        // number of threads doing CPU work
        val cpuWorkers = (created - blocking).coerceAtLeast(0)
        // Double check for overprovision
        // the number of CPU workers must not exceed corePoolSize
        if (cpuWorkers >= corePoolSize) return 0
        // the total number of threads must not exceed maxPoolSize
        if (created >= maxPoolSize) return 0
        // start & register new worker, commit index only after successful creation
        // indices start at 1
        val newIndex = createdWorkers + 1
        require(newIndex > 0 && workers[newIndex] == null)
        /*
         * 1) Claim the slot (under a lock) by the newly created worker
         * 2) Make it observable by increment created workers count
         * 3) Only then start the worker, otherwise it may miss its own creation
         */
        // create the thread
        worker = Worker(newIndex)
        // put the worker into the workers array
        workers.setSynchronized(newIndex, worker)
        require(newIndex == incrementCreatedWorkers())
        // return the new number of CPU workers; the worker itself is started outside the lock
        cpuWorkers + 1
}.also { worker.start() } // Start worker when the lock is released to reduce contention, see #3652
}