一、AQS
1.AQS 的核心目标
解决同步工具的共性问题:
同步状态管理:如何定义和修改共享资源的状态(如锁的持有、信号量的许可数)。
线程阻塞与唤醒:当线程获取资源失败时,如何安全地阻塞线程;当资源释放时,如何唤醒等待的线程。
等待队列管理:如何维护等待资源的线程队列,保证线程按公平 / 非公平策略获取资源。
AQS 通过模板方法模式将这些共性逻辑抽象出来,同步工具只需重写特定方法(如获取 / 释放状态)即可快速实现。
2.AQS 的核心结构
AQS 内部有三个核心成员:
2.1.同步状态(state)
一个 volatile int 类型的变量,用于表示共享资源的状态(具体含义由子类决定),例如:
- ReentrantLock 中,state=0 表示锁未被持有,state>0 表示锁被持有(值为重入次数);
- Semaphore 中,state 表示可用的许可数量;
- CountDownLatch 中,state 表示倒计时的计数器值。
对 state 的修改通过 CAS 操作保证原子性(compareAndSetState 方法),确保多线程下的状态一致性。
2.2.双向阻塞队列(等待队列),类似于 Monitor 的 EntryList
- 当线程获取资源失败时,会被包装成节点(Node)加入队列,进入阻塞状态;
- 队列是双向链表,每个节点包含:
  - 线程引用(thread):当前等待的线程;
  - 等待状态(waitStatus):表示节点的状态(如 CANCELLED 已取消、SIGNAL 等待唤醒等);
  - 前驱节点(prev)和后继节点(next):维护链表结构。
- 队列的头节点(head)是"哨兵节点"(不关联线程),用于简化队列操作;尾节点(tail)指向最后一个等待节点。
2.3.通过条件变量实现等待、唤醒机制,支持多个条件变量,类似于 Monitor 的 WaitSet
2.4.基于模板方法模式,提供了若干供子类按需重写的方法:
tryAcquire、tryRelease、tryAcquireShared、tryReleaseShared、isHeldExclusively
2.5.使用AQS自定义不可重入锁
package org.example.n8;
import lombok.extern.slf4j.Slf4j;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
@Slf4j(topic = "c.TestAqs")
public class TestAqs {
public static void main(String[] args) {
MyLock lock = new MyLock();
// testLock(lock); // 测试加锁
// testUnreentryLock(lock); // 测试不可重入锁
// testInterrupt(lock); // 测试可中断锁
testTryLock(lock);
}
private static void testTryLock(MyLock lock) {
new Thread(() -> {
lock.lock();
try {
log.debug("locking...");
TimeUnit.SECONDS.sleep(2);
} catch (InterruptedException e) {
throw new RuntimeException(e);
} finally {
log.debug("unlocking...");
lock.unlock();
}
},"t1").start();
new Thread(() -> {
try {
if (lock.tryLock(1000, TimeUnit.MILLISECONDS)) {
try {
log.debug("locking...");
}finally {
log.debug("unlocking...");
lock.unlock();
}
}else{
log.debug("获取锁失败...");
}
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
},"t2").start();
}
private static void testInterrupt(MyLock lock) {
new Thread(() -> {
lock.lock();
try {
log.debug("locking...");
TimeUnit.SECONDS.sleep(2);
} catch (InterruptedException e) {
throw new RuntimeException(e);
} finally {
log.debug("unlocking...");
lock.unlock();
}
},"t1").start();
Thread t2 = new Thread(() -> {
try {
lock.lockInterruptibly();
try {
log.debug("locking...");
} finally {
log.debug("unlocking...");
lock.unlock();
}
} catch (InterruptedException e) {
log.debug(Thread.currentThread().getName() + "线程在获取锁的时候被打断...");
}
}, "t2");
t2.start();
t2.interrupt();
}
private static void testUnreentryLock(MyLock lock) {
new Thread(() -> {
lock.lock();
log.debug("第一次加锁...");
lock.lock();
log.debug("第二次加锁...");
try {
log.debug("locking...");
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
throw new RuntimeException(e);
} finally {
log.debug("unlocking...");
lock.unlock();
}
},"t1").start();
}
private static void testLock(MyLock lock) {
new Thread(() -> {
lock.lock();
try {
log.debug("locking...");
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
throw new RuntimeException(e);
} finally {
log.debug("unlocking...");
lock.unlock();
}
},"t1").start();
new Thread(() -> {
lock.lock();
try {
log.debug("locking...");
}finally {
log.debug("unlocking...");
lock.unlock();
}
},"t2").start();
}
}
// 自定义锁(不可重入锁)
class MyLock implements Lock {
// 独占锁 同步类
class MySync extends AbstractQueuedSynchronizer {
@Override
protected boolean tryAcquire(int arg) {
if(compareAndSetState(0, 1)){
// 加锁,并设置 owner 为当前线程
setExclusiveOwnerThread(Thread.currentThread());
return true;
}
return false;
}
@Override
protected boolean tryRelease(int arg) {
// 下面设置owner线程和state为0的顺序不能乱,因为
// state变量是被volatile修饰的,可以保证之前的修改对其他线程可见
// 但是exclusiveOwnerThread变量是没有被volatile修饰的,所以不能保证之前的修改对其他线程可见
setExclusiveOwnerThread(null);
setState(0);
return true;
}
@Override // 判断当前线程是否持有锁
protected boolean isHeldExclusively() {
return getState() == 1;
}
public Condition newCondition() {
return new ConditionObject();
}
}
private MySync sync = new MySync();
@Override // 加锁(不成功进入等待队列)
public void lock() {
sync.acquire(1);
}
@Override // 加锁 可打断
public void lockInterruptibly() throws InterruptedException {
sync.acquireInterruptibly(1);
}
@Override // 尝试加锁(一次)
public boolean tryLock() {
return sync.tryAcquire(1);
}
@Override // 尝试加锁,带超时时间
public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
return sync.tryAcquireNanos(1, unit.toNanos(time));
}
@Override // 解锁
public void unlock() {
sync.release(1);
}
@Override // 创建条件变量
public Condition newCondition() {
return sync.newCondition();
}
}
3.ReentrantLock的实现原理
3.1.ReentrantLock的源码
/*
 * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
 *
 * Written by Doug Lea with assistance from members of JCP JSR-166
 * Expert Group and released to the public domain, as explained at
 * http://creativecommons.org/publicdomain/zero/1.0/
 */
package java.util.concurrent.locks;
import java.util.Collection;
import java.util.concurrent.TimeUnit;
import jdk.internal.vm.annotation.ReservedStackAccess;
/**
* A reentrant mutual exclusion {@link Lock} with the same basic
* behavior and semantics as the implicit monitor lock accessed using
* {@code synchronized} methods and statements, but with extended
* capabilities.
*
* <p>A {@code ReentrantLock} is <em>owned</em> by the thread last
* successfully locking, but not yet unlocking it. A thread invoking
* {@code lock} will return, successfully acquiring the lock, when
* the lock is not owned by another thread. The method will return
* immediately if the current thread already owns the lock. This can
* be checked using methods {@link #isHeldByCurrentThread}, and {@link
* #getHoldCount}.
*
* <p>The constructor for this class accepts an optional
* <em>fairness</em> parameter. When set {@code true}, under
* contention, locks favor granting access to the longest-waiting
* thread. Otherwise this lock does not guarantee any particular
* access order. Programs using fair locks accessed by many threads
* may display lower overall throughput (i.e., are slower; often much
* slower) than those using the default setting, but have smaller
* variances in times to obtain locks and guarantee lack of
* starvation. Note however, that fairness of locks does not guarantee
* fairness of thread scheduling. Thus, one of many threads using a
* fair lock may obtain it multiple times in succession while other
* active threads are not progressing and not currently holding the
* lock.
* Also note that the untimed {@link #tryLock()} method does not
* honor the fairness setting. It will succeed if the lock
* is available even if other threads are waiting.
*
* <p>It is recommended practice to <em>always</em> immediately
* follow a call to {@code lock} with a {@code try} block, most
* typically in a before/after construction such as:
*
* <pre> {@code
* class X {
* private final ReentrantLock lock = new ReentrantLock();
* // ...
*
* public void m() {
* lock.lock(); // block until condition holds
* try {
* // ... method body
* } finally {
* lock.unlock();
* }
* }
* }}</pre>
*
* <p>In addition to implementing the {@link Lock} interface, this
* class defines a number of {@code public} and {@code protected}
* methods for inspecting the state of the lock. Some of these
* methods are only useful for instrumentation and monitoring.
*
* <p>Serialization of this class behaves in the same way as built-in
* locks: a deserialized lock is in the unlocked state, regardless of
* its state when serialized.
*
* <p>This lock supports a maximum of 2147483647 recursive locks by
* the same thread. Attempts to exceed this limit result in
* {@link Error} throws from locking methods.
*
* @since 1.5
* @author Doug Lea
*/
public class ReentrantLock implements Lock, java.io.Serializable {
private static final long serialVersionUID = 7373984872572414699L;
/** Synchronizer providing all implementation mechanics */
private final Sync sync;
/**
* Base of synchronization control for this lock. Subclassed
* into fair and nonfair versions below. Uses AQS state to
* represent the number of holds on the lock.
*/
abstract static class Sync extends AbstractQueuedSynchronizer {
private static final long serialVersionUID = -5179523762034025860L;
/**
* Performs non-fair tryLock.
*/
@ReservedStackAccess
final boolean tryLock() {
Thread current = Thread.currentThread();
int c = getState();
if (c == 0) {
if (compareAndSetState(0, 1)) {
setExclusiveOwnerThread(current);
return true;
}
} else if (getExclusiveOwnerThread() == current) {
if (++c < 0) // overflow
throw new Error("Maximum lock count exceeded");
setState(c);
return true;
}
return false;
}
/**
* Checks for reentrancy and acquires if lock immediately
* available under fair vs nonfair rules. Locking methods
* perform initialTryLock check before relaying to
* corresponding AQS acquire methods.
*/
abstract boolean initialTryLock();
@ReservedStackAccess
final void lock() {
if (!initialTryLock())
acquire(1);
}
@ReservedStackAccess
final void lockInterruptibly() throws InterruptedException {
if (Thread.interrupted())
throw new InterruptedException();
if (!initialTryLock())
acquireInterruptibly(1);
}
@ReservedStackAccess
final boolean tryLockNanos(long nanos) throws InterruptedException {
if (Thread.interrupted())
throw new InterruptedException();
return initialTryLock() || tryAcquireNanos(1, nanos);
}
@ReservedStackAccess
protected final boolean tryRelease(int releases) {
int c = getState() - releases;
if (getExclusiveOwnerThread() != Thread.currentThread())
throw new IllegalMonitorStateException();
boolean free = (c == 0);
if (free)
setExclusiveOwnerThread(null);
setState(c);
return free;
}
protected final boolean isHeldExclusively() {
// While we must in general read state before owner,
// we don't need to do so to check if current thread is owner
return getExclusiveOwnerThread() == Thread.currentThread();
}
final ConditionObject newCondition() {
return new ConditionObject();
}
// Methods relayed from outer class
final Thread getOwner() {
return getState() == 0 ? null : getExclusiveOwnerThread();
}
final int getHoldCount() {
return isHeldExclusively() ? getState() : 0;
}
final boolean isLocked() {
return getState() != 0;
}
/**
* Reconstitutes the instance from a stream (that is, deserializes it).
*/
private void readObject(java.io.ObjectInputStream s)
throws java.io.IOException, ClassNotFoundException {
s.defaultReadObject();
setState(0); // reset to unlocked state
}
}
/**
* Sync object for non-fair locks
*/
static final class NonfairSync extends Sync {
private static final long serialVersionUID = 7316153563782823691L;
final boolean initialTryLock() {
Thread current = Thread.currentThread();
if (compareAndSetState(0, 1)) { // first attempt is unguarded
setExclusiveOwnerThread(current);
return true;
} else if (getExclusiveOwnerThread() == current) {
int c = getState() + 1;
if (c < 0) // overflow
throw new Error("Maximum lock count exceeded");
setState(c);
return true;
} else
return false;
}
/**
* Acquire for non-reentrant cases after initialTryLock prescreen
*/
protected final boolean tryAcquire(int acquires) {
if (getState() == 0 && compareAndSetState(0, acquires)) {
setExclusiveOwnerThread(Thread.currentThread());
return true;
}
return false;
}
}
/**
* Sync object for fair locks
*/
static final class FairSync extends Sync {
private static final long serialVersionUID = -3000897897090466540L;
/**
* Acquires only if reentrant or queue is empty.
*/
final boolean initialTryLock() {
Thread current = Thread.currentThread();
int c = getState();
if (c == 0) {
if (!hasQueuedThreads() && compareAndSetState(0, 1)) {
setExclusiveOwnerThread(current);
return true;
}
} else if (getExclusiveOwnerThread() == current) {
if (++c < 0) // overflow
throw new Error("Maximum lock count exceeded");
setState(c);
return true;
}
return false;
}
/**
* Acquires only if thread is first waiter or empty
*/
protected final boolean tryAcquire(int acquires) {
if (getState() == 0 && !hasQueuedPredecessors() &&
compareAndSetState(0, acquires)) {
setExclusiveOwnerThread(Thread.currentThread());
return true;
}
return false;
}
}
/**
* Creates an instance of {@code ReentrantLock}.
* This is equivalent to using {@code ReentrantLock(false)}.
*/
public ReentrantLock() {
sync = new NonfairSync();
}
/**
* Creates an instance of {@code ReentrantLock} with the
* given fairness policy.
*
* @param fair {@code true} if this lock should use a fair ordering policy
*/
public ReentrantLock(boolean fair) {
sync = fair ? new FairSync() : new NonfairSync();
}
/**
* Acquires the lock.
*
* <p>Acquires the lock if it is not held by another thread and returns
* immediately, setting the lock hold count to one.
*
* <p>If the current thread already holds the lock then the hold
* count is incremented by one and the method returns immediately.
*
* <p>If the lock is held by another thread then the
* current thread becomes disabled for thread scheduling
* purposes and lies dormant until the lock has been acquired,
* at which time the lock hold count is set to one.
*/
public void lock() {
sync.lock();
}
/**
* Acquires the lock unless the current thread is
* {@linkplain Thread#interrupt interrupted}.
*
* <p>Acquires the lock if it is not held by another thread and returns
* immediately, setting the lock hold count to one.
*
* <p>If the current thread already holds this lock then the hold count
* is incremented by one and the method returns immediately.
*
* <p>If the lock is held by another thread then the
* current thread becomes disabled for thread scheduling
* purposes and lies dormant until one of two things happens:
*
* <ul>
*
* <li>The lock is acquired by the current thread; or
*
* <li>Some other thread {@linkplain Thread#interrupt interrupts} the
* current thread.
*
* </ul>
*
* <p>If the lock is acquired by the current thread then the lock hold
* count is set to one.
*
* <p>If the current thread:
*
* <ul>
*
* <li>has its interrupted status set on entry to this method; or
*
* <li>is {@linkplain Thread#interrupt interrupted} while acquiring
* the lock,
*
* </ul>
*
* then {@link InterruptedException} is thrown and the current thread's
* interrupted status is cleared.
*
* <p>In this implementation, as this method is an explicit
* interruption point, preference is given to responding to the
* interrupt over normal or reentrant acquisition of the lock.
*
* @throws InterruptedException if the current thread is interrupted
*/
public void lockInterruptibly() throws InterruptedException {
sync.lockInterruptibly();
}
/**
* Acquires the lock only if it is not held by another thread at the time
* of invocation.
*
* <p>Acquires the lock if it is not held by another thread and
* returns immediately with the value {@code true}, setting the
* lock hold count to one. Even when this lock has been set to use a
* fair ordering policy, a call to {@code tryLock()} <em>will</em>
* immediately acquire the lock if it is available, whether or not
* other threads are currently waiting for the lock.
* This "barging" behavior can be useful in certain
* circumstances, even though it breaks fairness. If you want to honor
* the fairness setting for this lock, then use
* {@link #tryLock(long, TimeUnit) tryLock(0, TimeUnit.SECONDS)}
* which is almost equivalent (it also detects interruption).
*
* <p>If the current thread already holds this lock then the hold
* count is incremented by one and the method returns {@code true}.
*
* <p>If the lock is held by another thread then this method will return
* immediately with the value {@code false}.
*
* @return {@code true} if the lock was free and was acquired by the
* current thread, or the lock was already held by the current
* thread; and {@code false} otherwise
*/
public boolean tryLock() {
return sync.tryLock();
}
/**
* Acquires the lock if it is not held by another thread within the given
* waiting time and the current thread has not been
* {@linkplain Thread#interrupt interrupted}.
*
* <p>Acquires the lock if it is not held by another thread and returns
* immediately with the value {@code true}, setting the lock hold count
* to one. If this lock has been set to use a fair ordering policy then
* an available lock <em>will not</em> be acquired if any other threads
* are waiting for the lock. This is in contrast to the {@link #tryLock()}
* method. If you want a timed {@code tryLock} that does permit barging on
* a fair lock then combine the timed and un-timed forms together:
*
* <pre> {@code
* if (lock.tryLock() ||
* lock.tryLock(timeout, unit)) {
* ...
* }}</pre>
*
* <p>If the current thread
* already holds this lock then the hold count is incremented by one and
* the method returns {@code true}.
*
* <p>If the lock is held by another thread then the
* current thread becomes disabled for thread scheduling
* purposes and lies dormant until one of three things happens:
*
* <ul>
*
* <li>The lock is acquired by the current thread; or
*
* <li>Some other thread {@linkplain Thread#interrupt interrupts}
* the current thread; or
*
* <li>The specified waiting time elapses
*
* </ul>
*
* <p>If the lock is acquired then the value {@code true} is returned and
* the lock hold count is set to one.
*
* <p>If the current thread:
*
* <ul>
*
* <li>has its interrupted status set on entry to this method; or
*
* <li>is {@linkplain Thread#interrupt interrupted} while
* acquiring the lock,
*
* </ul>
* then {@link InterruptedException} is thrown and the current thread's
* interrupted status is cleared.
*
* <p>If the specified waiting time elapses then the value {@code false}
* is returned. If the time is less than or equal to zero, the method
* will not wait at all.
*
* <p>In this implementation, as this method is an explicit
* interruption point, preference is given to responding to the
* interrupt over normal or reentrant acquisition of the lock, and
* over reporting the elapse of the waiting time.
*
* @param timeout the time to wait for the lock
* @param unit the time unit of the timeout argument
* @return {@code true} if the lock was free and was acquired by the
* current thread, or the lock was already held by the current
* thread; and {@code false} if the waiting time elapsed before
* the lock could be acquired
* @throws InterruptedException if the current thread is interrupted
* @throws NullPointerException if the time unit is null
*/
public boolean tryLock(long timeout, TimeUnit unit)
throws InterruptedException {
return sync.tryLockNanos(unit.toNanos(timeout));
}
/**
* Attempts to release this lock.
*
* <p>If the current thread is the holder of this lock then the hold
* count is decremented. If the hold count is now zero then the lock
* is released. If the current thread is not the holder of this
* lock then {@link IllegalMonitorStateException} is thrown.
*
* @throws IllegalMonitorStateException if the current thread does not
* hold this lock
*/
public void unlock() {
sync.release(1);
}
/**
* Returns a {@link Condition} instance for use with this
* {@link Lock} instance.
*
* <p>The returned {@link Condition} instance supports the same
* usages as do the {@link Object} monitor methods ({@link
* Object#wait() wait}, {@link Object#notify notify}, and {@link
* Object#notifyAll notifyAll}) when used with the built-in
* monitor lock.
*
* <ul>
*
* <li>If this lock is not held when any of the {@link Condition}
* {@linkplain Condition#await() waiting} or {@linkplain
* Condition#signal signalling} methods are called, then an {@link
* IllegalMonitorStateException} is thrown.
*
* <li>When the condition {@linkplain Condition#await() waiting}
* methods are called the lock is released and, before they
* return, the lock is reacquired and the lock hold count restored
* to what it was when the method was called.
*
* <li>If a thread is {@linkplain Thread#interrupt interrupted}
* while waiting then the wait will terminate, an {@link
* InterruptedException} will be thrown, and the thread's
* interrupted status will be cleared.
*
* <li>Waiting threads are signalled in FIFO order.
*
* <li>The ordering of lock reacquisition for threads returning
* from waiting methods is the same as for threads initially
* acquiring the lock, which is in the default case not specified,
* but for <em>fair</em> locks favors those threads that have been
* waiting the longest.
*
* </ul>
*
* @return the Condition object
*/
public Condition newCondition() {
return sync.newCondition();
}
/**
* Queries the number of holds on this lock by the current thread.
*
* <p>A thread has a hold on a lock for each lock action that is not
* matched by an unlock action.
*
* <p>The hold count information is typically only used for testing and
* debugging purposes. For example, if a certain section of code should
* not be entered with the lock already held then we can assert that
* fact:
*
* <pre> {@code
* class X {
* final ReentrantLock lock = new ReentrantLock();
* // ...
* public void m() {
* assert lock.getHoldCount() == 0;
* lock.lock();
* try {
* // ... method body
* } finally {
* lock.unlock();
* }
* }
* }}</pre>
*
* @return the number of holds on this lock by the current thread,
* or zero if this lock is not held by the current thread
*/
public int getHoldCount() {
return sync.getHoldCount();
}
/**
* Queries if this lock is held by the current thread.
*
* <p>Analogous to the {@link Thread#holdsLock(Object)} method for
* built-in monitor locks, this method is typically used for
* debugging and testing. For example, a method that should only be
* called while a lock is held can assert that this is the case:
*
* <pre> {@code
* class X {
* final ReentrantLock lock = new ReentrantLock();
* // ...
*
* public void m() {
* assert lock.isHeldByCurrentThread();
* // ... method body
* }
* }}</pre>
*
* <p>It can also be used to ensure that a reentrant lock is used
* in a non-reentrant manner, for example:
*
* <pre> {@code
* class X {
* final ReentrantLock lock = new ReentrantLock();
* // ...
*
* public void m() {
* assert !lock.isHeldByCurrentThread();
* lock.lock();
* try {
* // ... method body
* } finally {
* lock.unlock();
* }
* }
* }}</pre>
*
* @return {@code true} if current thread holds this lock and
* {@code false} otherwise
*/
public boolean isHeldByCurrentThread() {
return sync.isHeldExclusively();
}
/**
* Queries if this lock is held by any thread. This method is
* designed for use in monitoring of the system state,
* not for synchronization control.
*
* @return {@code true} if any thread holds this lock and
* {@code false} otherwise
*/
public boolean isLocked() {
return sync.isLocked();
}
/**
* Returns {@code true} if this lock has fairness set true.
*
* @return {@code true} if this lock has fairness set true
*/
public final boolean isFair() {
return sync instanceof FairSync;
}
/**
* Returns the thread that currently owns this lock, or
* {@code null} if not owned. When this method is called by a
* thread that is not the owner, the return value reflects a
* best-effort approximation of current lock status. For example,
* the owner may be momentarily {@code null} even if there are
* threads trying to acquire the lock but have not yet done so.
* This method is designed to facilitate construction of
* subclasses that provide more extensive lock monitoring
* facilities.
*
* @return the owner, or {@code null} if not owned
*/
protected Thread getOwner() {
return sync.getOwner();
}
/**
* Queries whether any threads are waiting to acquire this lock. Note that
* because cancellations may occur at any time, a {@code true}
* return does not guarantee that any other thread will ever
* acquire this lock. This method is designed primarily for use in
* monitoring of the system state.
*
* @return {@code true} if there may be other threads waiting to
* acquire the lock
*/
public final boolean hasQueuedThreads() {
return sync.hasQueuedThreads();
}
/**
* Queries whether the given thread is waiting to acquire this
* lock. Note that because cancellations may occur at any time, a
* {@code true} return does not guarantee that this thread
* will ever acquire this lock. This method is designed primarily for use
* in monitoring of the system state.
*
* @param thread the thread
* @return {@code true} if the given thread is queued waiting for this lock
* @throws NullPointerException if the thread is null
*/
public final boolean hasQueuedThread(Thread thread) {
return sync.isQueued(thread);
}
/**
* Returns an estimate of the number of threads waiting to acquire
* this lock. The value is only an estimate because the number of
* threads may change dynamically while this method traverses
* internal data structures. This method is designed for use in
* monitoring system state, not for synchronization control.
*
* @return the estimated number of threads waiting for this lock
*/
public final int getQueueLength() {
return sync.getQueueLength();
}
/**
* Returns a collection containing threads that may be waiting to
* acquire this lock. Because the actual set of threads may change
* dynamically while constructing this result, the returned
* collection is only a best-effort estimate. The elements of the
* returned collection are in no particular order. This method is
* designed to facilitate construction of subclasses that provide
* more extensive monitoring facilities.
*
* @return the collection of threads
*/
protected Collection<Thread> getQueuedThreads() {
return sync.getQueuedThreads();
}
/**
* Queries whether any threads are waiting on the given condition
* associated with this lock. Note that because timeouts and
* interrupts may occur at any time, a {@code true} return does
* not guarantee that a future {@code signal} will awaken any
* threads. This method is designed primarily for use in
* monitoring of the system state.
*
* @param condition the condition
* @return {@code true} if there are any waiting threads
* @throws IllegalMonitorStateException if this lock is not held
* @throws IllegalArgumentException if the given condition is
* not associated with this lock
* @throws NullPointerException if the condition is null
*/
public boolean hasWaiters(Condition condition) {
if (condition == null)
throw new NullPointerException();
if (!(condition instanceof AbstractQueuedSynchronizer.ConditionObject))
throw new IllegalArgumentException("not owner");
return sync.hasWaiters((AbstractQueuedSynchronizer.ConditionObject)condition);
}
/**
* Returns an estimate of the number of threads waiting on the
* given condition associated with this lock. Note that because
* timeouts and interrupts may occur at any time, the estimate
* serves only as an upper bound on the actual number of waiters.
* This method is designed for use in monitoring of the system
* state, not for synchronization control.
*
* @param condition the condition
* @return the estimated number of waiting threads
* @throws IllegalMonitorStateException if this lock is not held
* @throws IllegalArgumentException if the given condition is
* not associated with this lock
* @throws NullPointerException if the condition is null
*/
public int getWaitQueueLength(Condition condition) {
if (condition == null)
throw new NullPointerException();
if (!(condition instanceof AbstractQueuedSynchronizer.ConditionObject))
throw new IllegalArgumentException("not owner");
return sync.getWaitQueueLength((AbstractQueuedSynchronizer.ConditionObject)condition);
}
/**
* Returns a collection containing those threads that may be
* waiting on the given condition associated with this lock.
* Because the actual set of threads may change dynamically while
* constructing this result, the returned collection is only a
* best-effort estimate. The elements of the returned collection
* are in no particular order. This method is designed to
* facilitate construction of subclasses that provide more
* extensive condition monitoring facilities.
*
* @param condition the condition
* @return the collection of threads
* @throws IllegalMonitorStateException if this lock is not held
* @throws IllegalArgumentException if the given condition is
* not associated with this lock
* @throws NullPointerException if the condition is null
*/
protected Collection<Thread> getWaitingThreads(Condition condition) {
if (condition == null)
throw new NullPointerException();
if (!(condition instanceof AbstractQueuedSynchronizer.ConditionObject))
throw new IllegalArgumentException("not owner");
return sync.getWaitingThreads((AbstractQueuedSynchronizer.ConditionObject)condition);
}
/**
* Returns a string identifying this lock, as well as its lock state.
* The state, in brackets, includes either the String {@code "Unlocked"}
* or the String {@code "Locked by"} followed by the
* {@linkplain Thread#getName name} of the owning thread.
*
* @return a string identifying this lock, as well as its lock state
*/
public String toString() {
Thread o = sync.getOwner();
return super.toString() + ((o == null) ?
"[Unlocked]" :
"[Locked by thread " + o.getName() + "]");
}
}
3.2.非公平锁的实现原理
3.2.1.构造方法
public ReentrantLock() {
sync = new NonfairSync();
}
3.2.2.加锁
public void lock() {
sync.lock();
}
@ReservedStackAccess
final void lock() {
if (!initialTryLock())
acquire(1);
}
final boolean initialTryLock() {
Thread current = Thread.currentThread();
if (compareAndSetState(0, 1)) { // first attempt is unguarded
setExclusiveOwnerThread(current);
return true;
} else if (getExclusiveOwnerThread() == current) {
int c = getState() + 1;
if (c < 0) // overflow
throw new Error("Maximum lock count exceeded");
setState(c);
return true;
} else
return false;
}
3.2.3.使用事例
package org.example.n8;
import lombok.extern.slf4j.Slf4j;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;
@Slf4j(topic = "c.TestReentrantLock")
public class TestReentrantLock {
public static void main(String[] args) {
ReentrantLock lock = new ReentrantLock();
// test加锁(lock);
// test可重入锁(lock);
// test尝试获取锁(lock);
// test超时获取锁(lock);
test可打断锁(lock);
}
private static void test可打断锁(ReentrantLock lock) {
Thread t1 = new Thread(() -> {
try {
TimeUnit.SECONDS.sleep(1);
lock.lockInterruptibly();
} catch (InterruptedException e) {
System.out.println(Thread.currentThread().isInterrupted());
log.debug("t1线程在获取锁的时候被打断...");
throw new RuntimeException(e);
}
try {
log.debug("locking...");
} finally {
log.debug("unlocking...");
lock.unlock();
}
}, "t1");
new Thread(()->{
lock.lock();
try {
log.debug("locking...");
TimeUnit.SECONDS.sleep(10);
} catch (InterruptedException e) {
throw new RuntimeException(e);
} finally {
lock.unlock();
}
},"t2").start();
t1.start();
t1.interrupt();
}
private static void test超时获取锁(ReentrantLock lock) {
new Thread(()->{
try {
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
try {
if (lock.tryLock(2, TimeUnit.SECONDS)) {
try {
log.debug("locking...");
} finally {
log.debug("unlocking...");
lock.unlock();
}
}else {
log.debug("获取锁失败...");
}
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
},"t1").start();
new Thread(()->{
lock.lock();
try {
log.debug("locking...");
TimeUnit.SECONDS.sleep(2);
} catch (InterruptedException e) {
throw new RuntimeException(e);
} finally {
lock.unlock();
}
},"t2").start();
}
private static void test尝试获取锁(ReentrantLock lock) {
new Thread(()->{
try {
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
if (lock.tryLock()) {
try {
log.debug("locking...");
} finally {
log.debug("unlocking...");
lock.unlock();
}
}else {
log.debug("获取锁失败...");
}
},"t1").start();
new Thread(()->{
lock.lock();
try {
TimeUnit.SECONDS.sleep(2);
log.debug("locking...");
} catch (InterruptedException e) {
throw new RuntimeException(e);
} finally {
lock.unlock();
}
},"t2").start();
}
private static void test可重入锁(ReentrantLock lock) {
new Thread(()->{
lock.lock();
log.debug("第一次加锁locking...");
lock.lock();
log.debug("第二次加锁locking...");
try {
log.debug("locking...");
} finally {
log.debug("unlocking...");
lock.unlock();
}
},"t1").start();
}
private static void test加锁(ReentrantLock lock) {
new Thread(()->{
lock.lock();
try {
log.debug("locking...");
TimeUnit.SECONDS.sleep(2);
} catch (InterruptedException e) {
throw new RuntimeException(e);
} finally {
lock.unlock();
}
},"t1").start();
new Thread(()->{
lock.lock();
try {
log.debug("locking...");
} finally {
lock.unlock();
}
},"t2").start();
}
}
4.读写锁ReentrantReadWriteLock
4.1.使用事例
package org.example.n8;
import lombok.extern.slf4j.Slf4j;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;
@Slf4j(topic = "c.TestReadWriteLock")
public class TestReadWriteLock {
public static void main(String[] args) throws InterruptedException {
DataContainer dc = new DataContainer();
// test读锁不阻塞(dc);
// test读写阻塞(dc);
test写写阻塞(dc);
}
private static void test写写阻塞(DataContainer dc) throws InterruptedException {
new Thread(() -> {
dc.write();
}, "t1").start();
Thread.sleep(100);
new Thread(() -> {
dc.write();
}, "t2").start();
}
private static void test读写阻塞(DataContainer dc) throws InterruptedException {
new Thread(() -> {
dc.read();
}, "t1").start();
Thread.sleep(100);
new Thread(() -> {
dc.write();
}, "t2").start();
}
private static void test读锁不阻塞(DataContainer dc) {
new Thread(() -> {
dc.read();
}, "t1").start();
new Thread(() -> {
dc.read();
}, "t2").start();
}
}
// 定义一个数据容器
@Slf4j(topic = "c.DataContainer")
class DataContainer {
private Object data;
private ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
private ReentrantReadWriteLock.ReadLock r = rw.readLock();
private ReentrantReadWriteLock.WriteLock w = rw.writeLock();
public Object read() {
log.debug("获取读锁...");
r.lock();
try {
log.debug("读取数据...");
TimeUnit.SECONDS.sleep(1);
return data;
} catch (InterruptedException e) {
throw new RuntimeException(e);
} finally {
log.debug("释放读锁...");
r.unlock();
}
}
public void write() {
log.debug("写入数据...");
w.lock();
try {
log.debug("获取写锁...");
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
throw new RuntimeException(e);
} finally {
log.debug("释放写锁...");
w.unlock();
}
}
}
4.2.注意事项
class CachedData {
Object data;
// 是否有效,如果失效,需要重新计算 data
volatile boolean cacheValid;
final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
void processCachedData() {
rwl.readLock().lock();
if (!cacheValid) {
// 获取写锁前必须释放读锁
rwl.readLock().unlock();
rwl.writeLock().lock();
try {
// 判断是否有其它线程已经获取了写锁、更新了缓存, 避免重复更新
if (!cacheValid) {
data = ...
cacheValid = true;
}
// 降级为读锁, 释放写锁, 这样能够让其它线程读取缓存
rwl.readLock().lock();
} finally {
rwl.writeLock().unlock();
}
}
// 自己用完数据, 释放读锁
try {
use(data);
} finally {
rwl.readLock().unlock();
}
}
}
4.3.应用
class GenericCachedDao<T> {
// HashMap 作为缓存非线程安全, 需要保护
HashMap<SqlPair, T> map = new HashMap<>();
ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
GenericDao genericDao = new GenericDao();
public int update(String sql, Object... params) {
SqlPair key = new SqlPair(sql, params);
// 加写锁, 防止其它线程对缓存读取和更改
lock.writeLock().lock();
try {
int rows = genericDao.update(sql, params);
map.clear();
return rows;
} finally {
lock.writeLock().unlock();
}
}
public T queryOne(Class<T> beanClass, String sql, Object... params) {
SqlPair key = new SqlPair(sql, params);
// 加读锁, 防止其它线程对缓存更改
lock.readLock().lock();
try {
T value = map.get(key);
if (value != null) {
return value;
}
} finally {
lock.readLock().unlock();
}
// 加写锁, 防止其它线程对缓存读取和更改
lock.writeLock().lock();
try {
// get 方法上面部分是可能多个线程进来的, 可能已经向缓存填充了数据
// 为防止重复查询数据库, 再次验证
T value = map.get(key);
if (value == null) {
// 如果没有, 查询数据库
value = genericDao.queryOne(beanClass, sql, params);
map.put(key, value);
}
return value;
} finally {
lock.writeLock().unlock();
}
}
// 作为 key 保证其是不可变的
class SqlPair {
private String sql;
private Object[] params;
public SqlPair(String sql, Object[] params) {
this.sql = sql;
this.params = params;
}
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
SqlPair sqlPair = (SqlPair) o;
return sql.equals(sqlPair.sql) &&
Arrays.equals(params, sqlPair.params);
}
@Override
public int hashCode() {
int result = Objects.hash(sql);
result = 31 * result + Arrays.hashCode(params);
return result;
}
}
}
4.4.原理
state 的高 16 位表示读锁(共享计数),低 16 位表示写锁(独占重入计数)。加锁都是用 CAS 修改对应的状态位,但读锁修改高 16 位、写锁修改低 16 位,具体流程有所区别。
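下面按 JDK 中 ReentrantReadWriteLock.Sync 的思路,摘录高低 16 位拆分的关键常量与方法(简化示意,省略类声明与其余代码):
static final int SHARED_SHIFT   = 16;
static final int SHARED_UNIT    = (1 << SHARED_SHIFT); // 读锁加一次锁,state 实际加 65536
static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;
// 读锁被持有的总次数 = state 无符号右移 16 位
static int sharedCount(int c)    { return c >>> SHARED_SHIFT; }
// 写锁的重入次数 = state 的低 16 位
static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }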
5.StampedLock读写锁
StampedLock 是 Java 8 引入的一种高性能读写锁,位于 java.util.concurrent.locks 包下。它通过版本戳(stamp) 机制优化了传统读写锁(如 ReentrantReadWriteLock)的性能,支持三种模式的锁操作,适用于读多写少的场景,能显著提升并发效率。
5.1.核心设计理念
StampedLock 的核心是用一个 long 类型的 “版本戳(stamp)” 表示锁的状态,不同的锁模式对应不同的戳值(如正数、负数、零)。线程获取锁时会得到一个戳,释放锁或转换锁模式时需要传入这个戳进行验证,确保操作的原子性和正确性。
相比 ReentrantReadWriteLock,它的优势在于:
- 支持乐观读模式(非阻塞),读操作无需加锁,适合读操作远多于写操作的场景。
- 读写锁不支持重入(简化设计,提升性能),但通过戳机制实现了更灵活的锁转换。
- 写锁与读锁互斥,读锁之间不互斥(同传统读写锁),但乐观读模式下读操作完全无阻塞。
加读锁
long stamp = lock.readLock();
lock.unlockRead(stamp);
加写锁
long stamp = lock.writeLock();
lock.unlockWrite(stamp);
乐观读 + 验戳
long stamp = lock.tryOptimisticRead();
// 验戳
if (!lock.validate(stamp)) {
    // 验戳失败,说明期间发生过写操作,需要升级为读锁后重新读取
}
5.2.使用事例
package org.example.n8;
import lombok.extern.slf4j.Slf4j;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.StampedLock;
public class TestStampedLock {
public static void main(String[] args) throws InterruptedException {
DataContainerStamped dc = new DataContainerStamped(1);
// test读读不加锁(dc);
test乐观读升级读锁(dc);
}
private static void test乐观读升级读锁(DataContainerStamped dc) throws InterruptedException {
new Thread(() -> {
dc.read(1);
}, "t1").start();
Thread.sleep(500);
new Thread(() -> {
dc.write(1000);
}, "t2").start();
}
private static void test读读不加锁(DataContainerStamped dc) throws InterruptedException {
new Thread(() -> {
dc.read(1);
}, "t1").start();
Thread.sleep(500);
new Thread(() -> {
dc.read(0);
}, "t2").start();
}
}
@Slf4j(topic = "c.DataContainerStamped")
class DataContainerStamped {
private int data;
private final StampedLock lock = new StampedLock();
public DataContainerStamped(int data) {
this.data = data;
}
public int read(int readTime) {
long stamp = lock.tryOptimisticRead();
log.debug("optimistic read locking...{}", stamp);
try {
TimeUnit.SECONDS.sleep(readTime);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
if(lock.validate(stamp)){
log.debug("read finish...{}", stamp);
return data;
}
// 锁升级
log.debug("upgrade read locking...{}", stamp);
try {
stamp = lock.readLock();
log.debug("read lock...{}", stamp);
TimeUnit.SECONDS.sleep(readTime);
log.debug("read finish...{}", stamp);
return data;
} catch (InterruptedException e) {
throw new RuntimeException(e);
} finally {
log.debug("read unlock {}", stamp);
lock.unlockRead(stamp);
}
}
public void write(int newData) {
long stamp = lock.writeLock();
log.debug("write lock {}", stamp);
try {
TimeUnit.SECONDS.sleep(2);
data = newData;
} catch (InterruptedException e) {
throw new RuntimeException(e);
} finally {
log.debug("write unlock {}", stamp);
lock.unlockWrite(stamp);
}
}
}
5.3.注意
StampedLock不支持锁重入
StampedLock不支持条件变量
ReentrantReadWriteLock 的写锁支持条件变量(Condition),但读锁不支持。这是由读写锁的设计逻辑决定的:写锁是独占锁(同一时间仅一个线程持有),符合条件变量对独占性的要求;而读锁是共享锁(多个线程可同时持有),条件变量无法在共享模式下正常工作。
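下面是一个最小示意,仅演示上述 API 的行为(非完整程序):
ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
Condition cond = rwl.writeLock().newCondition(); // 写锁支持条件变量
// rwl.readLock().newCondition();                // 读锁调用会抛出 UnsupportedOperationException
// StampedLock 则根本没有 newCondition 方法,也不支持重入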
6.Semaphore信号量
6.1.使用事例
package org.example.n8;
import lombok.extern.slf4j.Slf4j;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
@Slf4j(topic = "c.TestSemaphore")
public class TestSemaphore {
public static void main(String[] args) {
Semaphore semaphore = new Semaphore(3);
for (int i = 0; i < 10; i++) {
new Thread(() -> {
try {
semaphore.acquire();
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
try {
log.debug("running...");
try {
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
log.debug("end...");
} finally {
semaphore.release();
}
}, "t" + (1+i)).start();
}
}
}
6.2.Semaphore应用
@Slf4j(topic = "c.Pool")
class Pool {
// 1. 连接池大小
private final int poolSize;
// 2. 连接对象数组
private Connection[] connections;
// 3. 连接状态数组 0 表示空闲, 1 表示繁忙
private AtomicIntegerArray states;
private Semaphore semaphore;
// 4. 构造方法初始化
public Pool(int poolSize) {
this.poolSize = poolSize;
// 让许可数与资源数一致
this.semaphore = new Semaphore(poolSize);
this.connections = new Connection[poolSize];
this.states = new AtomicIntegerArray(new int[poolSize]);
for (int i = 0; i < poolSize; i++) {
connections[i] = new MockConnection("连接" + (i+1));
}
}
// 5. 借连接
public Connection borrow() {// t1, t2, t3
// 获取许可
try {
semaphore.acquire(); // 没有许可的线程,在此等待
} catch (InterruptedException e) {
e.printStackTrace();
}
for (int i = 0; i < poolSize; i++) {
// 获取空闲连接
if(states.get(i) == 0) {
if (states.compareAndSet(i, 0, 1)) {
log.debug("borrow {}", connections[i]);
return connections[i];
}
}
}
// 不会执行到这里
return null;
}
// 6. 归还连接
public void free(Connection conn) {
for (int i = 0; i < poolSize; i++) {
if (connections[i] == conn) {
states.set(i, 0);
log.debug("free {}", conn);
semaphore.release();
break;
}
}
}
}
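上面的 Pool 依赖课程中的 MockConnection(一个实现 java.sql.Connection 的桩类,此处未列出)。一个简单的借还示意如下(仅演示用法):
Pool pool = new Pool(2);
for (int i = 0; i < 5; i++) {
    new Thread(() -> {
        Connection conn = pool.borrow(); // 超出许可数的线程会阻塞在 acquire() 处
        try {
            Thread.sleep(1000);          // 模拟使用连接
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            pool.free(conn);             // 归还连接并 release 许可
        }
    }).start();
}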
7.CountDownLatch倒计时锁
CountDownLatch 是 Java 并发包(java.util.concurrent)中的一个同步工具类,用于协调多个线程之间的执行顺序,核心作用是:让一个或多个线程等待其他线程完成一系列操作后,再继续执行。
7.1.核心原理
CountDownLatch 基于一个计数器工作:
- 初始化时指定计数器的初始值(例如 new CountDownLatch(3) 表示需要等待 3 个操作完成)。
- 当一个线程完成任务后,调用 countDown() 方法,计数器的值减 1。
- 等待的线程调用 await() 方法,会阻塞直到计数器的值变为 0,此时所有等待的线程被唤醒,继续执行。
可以理解为:CountDownLatch 是一个 “发令枪”,多个线程准备完毕后 “倒计时”,直到最后一个线程完成,所有等待的线程才开始行动。
7.2.关键特性
计数器不可重置:一旦计数器的值减到 0,就无法再恢复到初始值(这一点与 CyclicBarrier 不同,后者可重用)。
多线程协作:可以让多个线程等待一个线程(如主线程等待所有子线程完成),也可以让一个线程等待多个线程(如多个前置任务完成后再执行主线程)。
灵活的等待机制:支持限时等待(await(long timeout, TimeUnit unit)),避免永久阻塞。
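限时等待的一个小示意(假设 latch 已按上文方式创建):
// await(timeout, unit) 返回 true 表示计数已减到 0,返回 false 表示等待超时
if (latch.await(2, TimeUnit.SECONDS)) {
    log.debug("所有前置任务已完成");
} else {
    log.debug("等待超时,剩余计数 = {}", latch.getCount());
}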
7.3.适用场景
并行任务协调:主线程等待多个子线程完成并行任务(如数据分片处理,所有分片完成后合并结果)。
初始化校验:应用启动时,主线程等待多个组件(如数据库连接、缓存加载)初始化完成后,再对外提供服务。
倒计时触发:多个线程同时准备,直到最后一个线程准备完毕,所有线程同时开始执行(如比赛开始前的 “各就各位,预备 —— 开始”)。
7.4.使用事例
package org.example.n8;
import lombok.extern.slf4j.Slf4j;
import java.util.concurrent.CountDownLatch;
@Slf4j(topic = "c.TestCountDownLatch")
public class TestCountDownLatch {
public static void main(String[] args) {
CountDownLatch latch = new CountDownLatch(3);
new Thread(() -> {
log.debug("子线程1开始");
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
log.debug("子线程1结束");
latch.countDown();
}).start();
new Thread(() -> {
log.debug("子线程2开始");
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
log.debug("子线程2结束");
latch.countDown();
}).start();
new Thread(() -> {
log.debug("子线程3开始");
try {
Thread.sleep(1500);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
log.debug("子线程3结束");
latch.countDown();
}).start();
try {
log.debug("主线程wait...");
latch.await();
log.debug("主线程wait end...");
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
}
}
7.5.CountDownLatch和join实现线程等待有什么区别
使用 join 时,必须等被等待的线程整个运行结束,主线程才能继续运行;而使用 CountDownLatch 时,不需要等这些线程结束,只要它们都调用过 countDown 方法,主线程就可以继续运行。也正因如此,CountDownLatch 可以配合线程池使用(线程池中的工作线程通常不会结束)。使用事例如下:
public static void main(String[] args) {
CountDownLatch latch = new CountDownLatch(3);
ExecutorService executorService = Executors.newFixedThreadPool(4);
executorService.submit(() -> {
log.debug("子线程1开始");
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
log.debug("子线程1结束");
latch.countDown();
});
executorService.submit(() -> {
log.debug("子线程2开始");
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
log.debug("子线程2结束");
latch.countDown();
});
executorService.submit(() -> {
log.debug("子线程3开始");
try {
Thread.sleep(1500);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
log.debug("子线程3结束");
latch.countDown();
});
executorService.submit(() -> {
log.debug("主线程开始");
try {
latch.await();
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
log.debug("主线程结束");
});
}
7.6.游戏玩家加载
private static void test游戏玩家加载() throws InterruptedException {
ExecutorService executorService = Executors.newFixedThreadPool(10);
Random random = new Random();
CountDownLatch latch = new CountDownLatch(10);
String[] all = new String[10];
for (int j = 0; j < 10; j++) {
int index = j;
executorService.submit(() -> {
for (int i = 0; i <= 100; i++) {
try {
Thread.sleep(random.nextInt(100));
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
all[index] = i+"%";
System.out.print("\r"+Arrays.toString(all));
}
latch.countDown();
});
}
latch.await();
System.out.println();
System.out.println("游戏开始....");
executorService.shutdown();
}
8.CyclicBarrier循环屏障
CyclicBarrier(循环屏障)是 Java 并发包(java.util.concurrent)中的同步工具类,用于让一组线程互相等待,直到全部到达某个屏障点(Barrier)后再同时继续执行,且支持重复使用(这是它与 CountDownLatch 的核心区别)。
8.1.核心原理
CyclicBarrier 基于 “屏障” 的概念工作:
- 初始化时指定参与的线程数量(parties)和一个屏障动作(barrierAction,可选)。
- 每个线程执行到屏障点时,调用 await() 方法,该线程会被阻塞,直到所有参与线程都到达屏障点。
- 当最后一个线程到达屏障点后:
  - 若指定了 barrierAction,则由最后到达的线程执行该动作(如汇总结果、日志记录);
  - 所有阻塞的线程被同时唤醒,继续执行后续逻辑。
- 屏障可重复使用:所有线程通过屏障后,CyclicBarrier 会重置状态,允许下一轮线程再次使用。
8.2.关键特性
- 循环性:与 CountDownLatch 一次性使用不同,CyclicBarrier 可多次重复使用(所有线程通过后自动重置,也可通过 reset() 方法手动重置)。
- 屏障动作:支持在所有线程到达后,由最后一个线程执行一个统一的动作(如数据汇总)。
- 中断与超时:await() 方法支持中断(抛出 InterruptedException)和超时(await(long timeout, TimeUnit unit)),避免线程永久阻塞(见下方示意)。
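限时等待与 broken 状态的一个小示意(假设只有当前线程到达屏障,仅演示 API 行为):
CyclicBarrier barrier = new CyclicBarrier(2);
try {
    barrier.await(1, TimeUnit.SECONDS); // 只有一个线程到达,最多等 1 秒
} catch (TimeoutException e) {
    // 超时后屏障进入 broken 状态,其余在屏障上等待的线程会收到 BrokenBarrierException
    log.debug("等待超时,isBroken = {}", barrier.isBroken());
    barrier.reset(); // 重置后可用于新的一轮
} catch (InterruptedException | BrokenBarrierException e) {
    Thread.currentThread().interrupt();
}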
8.3.适用场景
- 多线程协同任务:如数据分片计算(每个线程算一部分,全部完成后汇总)、多阶段任务(所有线程完成第一阶段后,再同时开始第二阶段)。
- 并发测试:让多个线程同时开始执行测试逻辑,确保测试的公平性(如模拟 100 个用户同时登录)。
- 循环任务场景:需要重复执行多线程协同任务的场景(如定时批量处理,每批任务都需要多线程配合)。
8.4.使用事例
package org.example.n8;
import lombok.extern.slf4j.Slf4j;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
@Slf4j(topic = "c.TestCyclicBarrier")
public class TestCyclicBarrier {
public static void main(String[] args) {
CyclicBarrier cyclicBarrier = new CyclicBarrier(2, () -> {
log.debug("task1 task2 finish...");
});
ExecutorService executorService = Executors.newFixedThreadPool(2);
for (int i = 0; i < 3; i++) {
executorService.submit(() -> {
log.debug("task1 begin...");
try {
Thread.sleep(1000);
cyclicBarrier.await();
} catch (Exception e) {
e.printStackTrace();
}
log.debug("task1 end...");
});
executorService.submit(() -> {
log.debug("task2 begin...");
try {
Thread.sleep(2000);
cyclicBarrier.await();
}catch (Exception e){
e.printStackTrace();
}
log.debug("task2 end...");
});
}
executorService.shutdown();
}
}
9.线程安全类集合
9.1.ConcurrentHashMap
9.1.1.使用事例
单词计数
demo(
() -> new ConcurrentHashMap<String, LongAdder>(),
(map, words) -> {
for (String word : words) {
// 注意不能使用 putIfAbsent,此方法返回的是上一次的 value,首次调用返回 null
map.computeIfAbsent(word, (key) -> new LongAdder()).increment();
}
}
);
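上面片段中的 demo(...) 是课程提供的辅助方法,这里未列出;下面给出一个可独立运行的等价写法(类名 WordCountDemo 为演示用,仅作示意):
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
public class WordCountDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, LongAdder> map = new ConcurrentHashMap<>();
        List<String> words = Arrays.asList("a", "b", "a", "c", "b", "a");
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (String word : words) {
                    // computeIfAbsent:key 不存在时创建 LongAdder 并返回,存在时返回已有值,整体原子
                    map.computeIfAbsent(word, key -> new LongAdder()).increment();
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            t.join();
        }
        System.out.println(map); // 每个单词的计数 = 4 * 该词在列表中出现的次数
    }
}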
9.1.2.JDK7 HashMap并发死链
多线程环境下使用非线程安全的 HashMap 时:JDK7 的 HashMap 向同一个桶添加元素采用头插法,多个线程同时添加元素并触发扩容时,可能把链表迁移成环形结构(死链),导致后续 get 时死循环。
JDK8 改为尾插法,虽然不会产生死链,但在多线程环境下扩容仍会出现其他问题,比如扩容时丢失数据。
9.1.3.JDK8的ConcurrentHashMap
// 默认为 0
// 当初始化时, 为 -1
// 当扩容时, 为 -(1 + 扩容线程数)
// 当初始化或扩容完成后,为 下一次的扩容的阈值大小
private transient volatile int sizeCtl;
// 整个 ConcurrentHashMap 就是一个 Node[]
static class Node<K,V> implements Map.Entry<K,V> {}
// hash 表
transient volatile Node<K,V>[] table;
// 扩容时的 新 hash 表
private transient volatile Node<K,V>[] nextTable;
// 扩容时如果某个 bin 迁移完毕, 用 ForwardingNode 作为旧 table bin 的头结点
static final class ForwardingNode<K,V> extends Node<K,V> {}
// 用在 compute 以及 computeIfAbsent 时, 用来占位, 计算完成后替换为普通 Node
static final class ReservationNode<K,V> extends Node<K,V> {}
// 作为 treebin 的头节点, 存储 root 和 first
static final class TreeBin<K,V> extends Node<K,V> {}
// 作为 treebin 的节点, 存储 parent, left, right
static final class TreeNode<K,V> extends Node<K,V> {}
重要方法
// 获取 Node[] 中第 i 个 Node
static final <K,V> Node<K,V> tabAt(Node<K,V>[] tab, int i)
// cas 修改 Node[] 中第 i 个 Node 的值, c 为旧值, v 为新值
static final <K,V> boolean casTabAt(Node<K,V>[] tab, int i, Node<K,V> c, Node<K,V> v)
// 直接修改 Node[] 中第 i 个 Node 的值, v 为新值
static final <K,V> void setTabAt(Node<K,V>[] tab, int i, Node<K,V> v)
构造器分析
// initialCapacity 初始容量,
// float loadFactor 扩容因子
// concurrencyLevel 并发度
public ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) {
if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
throw new IllegalArgumentException();
if (initialCapacity < concurrencyLevel) // Use at least as many bins
initialCapacity = concurrencyLevel; // as estimated threads
long size = (long)(1.0 + (long)initialCapacity / loadFactor);
// tableSizeFor 仍然是保证计算的大小是 2^n, 即 16,32,64 ...
int cap = (size >= (long)MAXIMUM_CAPACITY) ?
MAXIMUM_CAPACITY : tableSizeFor((int)size);
this.sizeCtl = cap;
}
get流程
public V get(Object key) {
Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
// spread 方法能确保返回结果是正数
int h = spread(key.hashCode());
if ((tab = table) != null && (n = tab.length) > 0 &&
(e = tabAt(tab, (n - 1) & h)) != null) {
// 如果头结点已经是要查找的 key
if ((eh = e.hash) == h) {
if ((ek = e.key) == key || (ek != null && key.equals(ek)))
return e.val;
}
// hash 为负数表示该 bin 在扩容中或是 treebin, 这时调用 find 方法来查找
else if (eh < 0)
return (p = e.find(h, key)) != null ? p.val : null;
// 正常遍历链表, 用 equals 比较
while ((e = e.next) != null) {
if (e.hash == h &&
((ek = e.key) == key || (ek != null && key.equals(ek))))
return e.val;
}
}
return null;
}
put方法
public V put(K key, V value) {
return putVal(key, value, false);
}
final V putVal(K key, V value, boolean onlyIfAbsent) {
if (key == null || value == null) throw new NullPointerException();
// 其中 spread 方法会综合高位低位, 具有更好的 hash 性
int hash = spread(key.hashCode());
int binCount = 0;
for (Node<K, V>[] tab = table; ; ) {
// f 是链表头节点
// fh 是链表头结点的 hash
// i 是链表在 table 中的下标
Node<K, V> f;
int n, i, fh;
// 要创建 table
if (tab == null || (n = tab.length) == 0)
// 初始化 table 使用了 cas, 无需 synchronized 创建成功, 进入下一轮循环
tab = initTable();
// 要创建链表头节点
else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
// 添加链表头使用了 cas, 无需 synchronized
if (casTabAt(tab, i, null,
new Node<K, V>(hash, key, value, null)))
break;
}
// 帮忙扩容
else if ((fh = f.hash) == MOVED)
// 帮忙之后, 进入下一轮循环
tab = helpTransfer(tab, f);
else {
V oldVal = null;
// 锁住链表头节点
synchronized (f) {
// 再次确认链表头节点没有被移动
if (tabAt(tab, i) == f) {
// 链表
if (fh >= 0) {
binCount = 1;
// 遍历链表
for (Node<K, V> e = f; ; ++binCount) {
K ek;
// 找到相同的 key
if (e.hash == hash &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
oldVal = e.val;
// 更新
if (!onlyIfAbsent)
e.val = value;
break;
}
Node<K, V> pred = e;
// 已经是最后的节点了, 新增 Node, 追加至链表尾
if ((e = e.next) == null) {
pred.next = new Node<K, V>(hash, key,
value, null);
break;
}
}
}
// 红黑树
else if (f instanceof TreeBin) {
Node<K, V> p;
binCount = 2;
// putTreeVal 会看 key 是否已经在树中, 是, 则返回对应的 TreeNode
if ((p = ((TreeBin<K, V>) f).putTreeVal(hash, key,
value)) != null) {
oldVal = p.val;
if (!onlyIfAbsent)
p.val = value;
}
}
}
// 释放链表头节点的锁
}
if (binCount != 0) {
if (binCount >= TREEIFY_THRESHOLD)
// 如果链表长度 >= 树化阈值(8), 进行链表转为红黑树
treeifyBin(tab, i);
if (oldVal != null)
return oldVal;
break;
}
}
}
// 增加 size 计数
addCount(1L, binCount);
return null;
}
private final Node<K, V>[] initTable() {
Node<K, V>[] tab;
int sc;
while ((tab = table) == null || tab.length == 0) {
if ((sc = sizeCtl) < 0)
Thread.yield();
// 尝试将 sizeCtl 设置为 -1(表示初始化 table)
else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
// 获得锁, 创建 table, 这时其它线程会在 while() 循环中 yield 直至 table 创建
try {
if ((tab = table) == null || tab.length == 0) {
int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
Node<K, V>[] nt = (Node<K, V>[]) new Node<?, ?>[n];
table = tab = nt;
sc = n - (n >>> 2);
}
} finally {
sizeCtl = sc;
}
break;
}
}
return tab;
}
// check 是之前 binCount 的个数
private final void addCount(long x, int check) {
CounterCell[] as;
long b, s;
if (
// 已经有了 counterCells, 向 cell 累加
(as = counterCells) != null ||
// 还没有, 向 baseCount 累加
!U.compareAndSwapLong(this, BASECOUNT, b = baseCount, s = b + x)
) {
CounterCell a;
long v;
int m;
boolean uncontended = true;
if (
// 还没有 counterCells
as == null || (m = as.length - 1) < 0 ||
// 还没有 cell
(a = as[ThreadLocalRandom.getProbe() & m]) == null ||
// cell cas 增加计数失败
!(uncontended = U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))
) {
// 创建累加单元数组和cell, 累加重试
fullAddCount(x, uncontended);
return;
}
if (check <= 1)
return;
// 获取元素个数
s = sumCount();
}
if (check >= 0) {
Node<K, V>[] tab, nt;
int n, sc;
while (s >= (long) (sc = sizeCtl) && (tab = table) != null &&
(n = tab.length) < MAXIMUM_CAPACITY) {
int rs = resizeStamp(n);
if (sc < 0) {
if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
transferIndex <= 0)
break;
// newtable 已经创建了,帮忙扩容
if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
transfer(tab, nt);
}
// 需要扩容,这时 newtable 未创建
else if (U.compareAndSwapInt(this, SIZECTL, sc,
(rs << RESIZE_STAMP_SHIFT) + 2))
transfer(tab, null);
s = sumCount();
}
}
}
9.1.4.JDK7的ConcurrentHashMap
构造器分析
public ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) {
if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
throw new IllegalArgumentException();
if (concurrencyLevel > MAX_SEGMENTS)
concurrencyLevel = MAX_SEGMENTS;
// ssize 必须是 2^n, 即 2, 4, 8, 16 ... 表示了 segments 数组的大小
int sshift = 0;
int ssize = 1;
while (ssize < concurrencyLevel) {
++sshift;
ssize <<= 1;
}
// segmentShift 默认是 32 - 4 = 28
this.segmentShift = 32 - sshift;
// segmentMask 默认是 15 即 0000 0000 0000 1111
this.segmentMask = ssize - 1;
if (initialCapacity > MAXIMUM_CAPACITY)
initialCapacity = MAXIMUM_CAPACITY;
int c = initialCapacity / ssize;
if (c * ssize < initialCapacity)
++c;
int cap = MIN_SEGMENT_TABLE_CAPACITY;
while (cap < c)
cap <<= 1;
// 创建 segments and segments[0]
Segment<K,V> s0 =
new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
(HashEntry<K,V>[])new HashEntry[cap]);
Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize];
UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
this.segments = ss;
}
put流程调用了Segment对象的put方法
public V put(K key, V value) {
Segment<K,V> s;
if (value == null)
throw new NullPointerException();
int hash = hash(key);
// 计算出 segment 下标
int j = (hash >>> segmentShift) & segmentMask;
// 获得 segment 对象, 判断是否为 null, 是则创建该 segment
if ((s = (Segment<K,V>)UNSAFE.getObject
(segments, (j << SSHIFT) + SBASE)) == null) {
// 这时不能确定是否真的为 null, 因为其它线程也发现该 segment 为 null,
// 因此在 ensureSegment 里用 cas 方式保证该 segment 安全性
s = ensureSegment(j);
}
// 进入 segment 的put 流程
return s.put(key, hash, value, false);
}
Segment对象的put方法,Segment继承了ReentrantLock
final V put(K key, int hash, V value, boolean onlyIfAbsent) {
// 尝试加锁
HashEntry<K,V> node = tryLock() ? null :
// 如果不成功, 进入 scanAndLockForPut 流程
// 如果是多核 cpu 最多 tryLock 64 次, 进入 lock 流程
// 在尝试期间, 还可以顺便看该节点在链表中有没有, 如果没有顺便创建出来
scanAndLockForPut(key, hash, value);
// 执行到这里 segment 已经被成功加锁, 可以安全执行
V oldValue;
try {
HashEntry<K,V>[] tab = table;
int index = (tab.length - 1) & hash;
HashEntry<K,V> first = entryAt(tab, index);
for (HashEntry<K,V> e = first;;) {
if (e != null) {
// 更新
K k;
if ((k = e.key) == key ||
(e.hash == hash && key.equals(k))) {
oldValue = e.value;
if (!onlyIfAbsent) {
e.value = value;
++modCount;
}
break;
}
e = e.next;
}
else {
// 新增
// 1) 之前等待锁时, node 已经被创建, next 指向链表头
if (node != null)
node.setNext(first);
else
// 2) 创建新 node
node = new HashEntry<K,V>(hash, key, value, first);
int c = count + 1;
// 3) 扩容
if (c > threshold && tab.length < MAXIMUM_CAPACITY)
rehash(node);
else
// 将 node 作为链表头
setEntryAt(tab, index, node);
++modCount;
count = c;
oldValue = null;
break;
}
}
} finally {
unlock();
}
return oldValue;
}
rehash 流程,也就是扩容流程
private void rehash(HashEntry<K,V> node) {
HashEntry<K,V>[] oldTable = table;
int oldCapacity = oldTable.length;
int newCapacity = oldCapacity << 1;
threshold = (int)(newCapacity * loadFactor);
HashEntry<K,V>[] newTable =
(HashEntry<K,V>[]) new HashEntry[newCapacity];
int sizeMask = newCapacity - 1;
for (int i = 0; i < oldCapacity ; i++) {
HashEntry<K,V> e = oldTable[i];
if (e != null) {
HashEntry<K,V> next = e.next;
int idx = e.hash & sizeMask;
if (next == null) // Single node on list
newTable[idx] = e;
else { // Reuse consecutive sequence at same slot
HashEntry<K,V> lastRun = e;
int lastIdx = idx;
// 过一遍链表, 尽可能把 rehash 后 idx 不变的节点重用
for (HashEntry<K,V> last = next;
last != null;
last = last.next) {
int k = last.hash & sizeMask;
if (k != lastIdx) {
lastIdx = k;
lastRun = last;
}
}
newTable[lastIdx] = lastRun;
// 剩余节点需要新建
for (HashEntry<K,V> p = e; p != lastRun; p = p.next) {
V v = p.value;
int h = p.hash;
int k = h & sizeMask;
HashEntry<K,V> n = newTable[k];
newTable[k] = new HashEntry<K,V>(h, p.key, v, n);
}
}
}
}
// 扩容完成, 才加入新的节点
int nodeIndex = node.hash & sizeMask; // add the new node
node.setNext(newTable[nodeIndex]);
newTable[nodeIndex] = node;
// 替换为新的 HashEntry table
table = newTable;
}
get 流程:get 时并未加锁,而是用 UNSAFE 的 volatile 读方法保证可见性;扩容过程中,get 先发生就从旧表取内容,get 后发生就从新表取内容。
public V get(Object key) {
Segment<K,V> s; // manually integrate access methods to reduce overhead
HashEntry<K,V>[] tab;
int h = hash(key);
// u 为 segment 对象在数组中的偏移量
long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
// s 即为 segment
if ((s = (Segment<K,V>)UNSAFE.getObjectVolatile(segments, u)) != null &&
(tab = s.table) != null) {
for (HashEntry<K,V> e = (HashEntry<K,V>) UNSAFE.getObjectVolatile
(tab, ((long)(((tab.length - 1) & h)) << TSHIFT) + TBASE);
e != null; e = e.next) {
K k;
if ((k = e.key) == key || (e.hash == h && key.equals(k)))
return e.value;
}
}
return null;
}
size 获取流程:计算元素个数时,先不加锁计算两次,如果前后两次结果一样,认为个数正确并返回;如果不一样则重试,重试次数超过 3 次,就将所有 segment 锁住,重新计算个数后返回。
public int size() {
// Try a few times to get accurate count. On failure due to
// continuous async changes in table, resort to locking.
final Segment<K,V>[] segments = this.segments;
int size;
boolean overflow; // true if size overflows 32 bits
long sum; // sum of modCounts
long last = 0L; // previous sum
int retries = -1; // first iteration isn't retry
try {
for (;;) {
if (retries++ == RETRIES_BEFORE_LOCK) {
// retry limit reached: ensure every segment exists (creating missing ones) and lock them all
for (int j = 0; j < segments.length; ++j)
ensureSegment(j).lock(); // force creation
}
sum = 0L;
size = 0;
overflow = false;
for (int j = 0; j < segments.length; ++j) {
Segment<K,V> seg = segmentAt(segments, j);
if (seg != null) {
sum += seg.modCount;
int c = seg.count;
if (c < 0 || (size += c) < 0)
overflow = true;
}
}
if (sum == last)
break;
last = sum;
}
} finally {
if (retries > RETRIES_BEFORE_LOCK) {
for (int j = 0; j < segments.length; ++j)
segmentAt(segments, j).unlock();
}
}
return overflow ? Integer.MAX_VALUE : size;
}
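To tie the put/get/size flows together, here is a minimal usage sketch (not part of the original text; thread and element counts are arbitrary illustration choices): several threads insert distinct keys concurrently and size() is read once all writers have finished.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class TestChmPut {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        int threads = 4, perThread = 1000;
        CountDownLatch done = new CountDownLatch(threads);
        for (int t = 0; t < threads; t++) {
            final int id = t;
            new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    // each put only locks one segment (JDK 7) / one bin (JDK 8),
                    // so writers to different segments do not block each other
                    map.put(id + "-" + i, i);
                }
                done.countDown();
            }, "writer-" + id).start();
        }
        done.await();
        // with all writers finished the count is stable: prints 4000
        System.out.println(map.size());
    }
}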
9.2.LinkedBlockingQueue
LinkedBlockingQueue is a blocking queue in the Java concurrency package (java.util.concurrent) backed by a linked list. It supports efficient element hand-off between threads, achieves thread safety through explicit locks and condition variables, and has blocking semantics: writes block when the queue is full, reads block when it is empty. A detailed breakdown follows.
9.2.1. Core characteristics
- Bounded / unbounded: a capacity can be supplied at construction time (new LinkedBlockingQueue(int capacity)), which makes the queue bounded. Without a capacity the default is Integer.MAX_VALUE (about 2.1 billion), so the queue is usually described as "unbounded", although strictly it is just a very large bounded queue.
- Blocking behaviour: when the queue is full, put() blocks the writing thread until space becomes free; when the queue is empty, take() blocks the reading thread until an element arrives. (A small producer/consumer sketch follows this list.)
- FIFO order: backed by a linked list, elements are enqueued at the tail and dequeued from the head in first-in-first-out order.
- Thread safety: two independent locks (one for enqueue, one for dequeue) plus condition variables allow concurrent reads and writes by multiple threads.
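A minimal sketch of the blocking behaviour described above (illustrative, not from the original text; thread names and timings are arbitrary): the queue has capacity 2, so the producer blocks on its third put until the consumer takes an element.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class TestLbqBlocking {
    public static void main(String[] args) {
        LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>(2); // bounded: capacity 2
        new Thread(() -> {
            try {
                for (int i = 1; i <= 3; i++) {
                    queue.put(i);                // the 3rd put blocks until the consumer takes
                    System.out.println("put " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "producer").start();

        new Thread(() -> {
            try {
                TimeUnit.SECONDS.sleep(1);       // let the producer fill the queue first
                System.out.println("take " + queue.take()); // frees a slot, unblocking the producer
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "consumer").start();
    }
}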
9.2.2. Underlying structure
The underlying structure of LinkedBlockingQueue is a singly linked list; the core inner class and fields are:
public class LinkedBlockingQueue<E> extends AbstractQueue<E>
implements BlockingQueue<E>, Serializable {
// linked-list node (holds the element and a pointer to the next node)
static class Node<E> {
E item;
Node<E> next;
Node(E x) { item = x; }
}
// queue capacity (Integer.MAX_VALUE is treated as unbounded)
private final int capacity;
// number of elements (AtomicInteger keeps the count consistent across both locks)
private final AtomicInteger count = new AtomicInteger();
// head node (sentinel): head.item is always null; head.next points to the first real element
transient Node<E> head;
// tail node: last.next is always null; new elements are linked onto last.next
private transient Node<E> last;
// dequeue lock (guards take/poll and other read operations)
private final ReentrantLock takeLock = new ReentrantLock();
// readers wait on this condition while the queue is empty
private final Condition notEmpty = takeLock.newCondition();
// enqueue lock (guards put/offer and other write operations)
private final ReentrantLock putLock = new ReentrantLock();
// writers wait on this condition while the queue is full
private final Condition notFull = putLock.newCondition();
}
- Linked list: head is the sentinel head node (it carries no data), last is the tail; elements are chained through next pointers.
- Two-lock design: takeLock (read lock) and putLock (write lock) are independent of each other, so an enqueue and a dequeue can run in parallel (one thread putting while another takes), improving throughput.
- Condition variables: notEmpty wakes waiting readers once the queue has an element; notFull wakes waiting writers once the queue has free space.
9.2.3. Core operations
Enqueue (put(E e))
public void put(E e) throws InterruptedException {
if (e == null) throw new NullPointerException();
int c = -1;
Node<E> node = new Node<E>(e);
final ReentrantLock putLock = this.putLock;
final AtomicInteger count = this.count;
putLock.lockInterruptibly(); // acquire the enqueue lock (interruptibly)
try {
// if the queue is full, the writer waits on notFull
while (count.get() == capacity) {
notFull.await();
}
enqueue(node); // link the node onto the tail
c = count.getAndIncrement(); // atomically increment the count (returns the old value)
// if there is still free space after this insert, wake another waiting writer
if (c + 1 < capacity) {
notFull.signal();
}
} finally {
putLock.unlock(); // release the enqueue lock
}
// if the queue was empty before this insert, wake a waiting reader
if (c == 0) {
signalNotEmpty(); // acquires takeLock internally and signals notEmpty
}
}
// enqueue core: append the node after the current tail
private void enqueue(Node<E> node) {
last = last.next = node; // point last.next at the new node, then move last to it
}
- Steps: acquire the enqueue lock → wait while the queue is full → enqueue → update the count → release the lock → wake a reading/writing thread if needed.
- Concurrency: enqueue only needs putLock, so writers queue up behind one another, while readers can run at the same time because the dequeue lock is separate.
Dequeue (take())
public E take() throws InterruptedException {
E x;
int c = -1;
final AtomicInteger count = this.count;
final ReentrantLock takeLock = this.takeLock;
takeLock.lockInterruptibly(); // acquire the dequeue lock (interruptibly)
try {
// if the queue is empty, the reader waits on notEmpty
while (count.get() == 0) {
notEmpty.await();
}
x = dequeue(); // unlink the first real node and return its element
c = count.getAndDecrement(); // atomically decrement the count (returns the old value)
// if elements remain after this removal, wake another waiting reader
if (c > 1) {
notEmpty.signal();
}
} finally {
takeLock.unlock(); // release the dequeue lock
}
// if the queue was full before this removal, wake a waiting writer
if (c == capacity) {
signalNotFull(); // acquires putLock internally and signals notFull
}
return x;
}
// dequeue core: remove the node at the head of the list
private E dequeue() {
Node<E> h = head;
Node<E> first = h.next; // the first real element node
h.next = h; // self-link the old head to help GC
head = first; // first becomes the new (sentinel) head
E x = first.item;
first.item = null; // clear item so the new head stays a sentinel
return x;
}
- Steps: acquire the dequeue lock → wait while the queue is empty → dequeue → update the count → release the lock → wake a reading/writing thread if needed.
- Concurrency: dequeue only needs takeLock, which is independent of the enqueue lock, so enqueue and dequeue can run in parallel.
Non-blocking operations (offer()/poll())
- offer(E e): tries to enqueue; if the queue is full it returns false immediately instead of blocking.
- poll(): tries to dequeue; if the queue is empty it returns null immediately instead of blocking.
- The logic mirrors put()/take(), except that the thread never waits on a condition and the result is returned right away (see the sketch below).
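A minimal single-threaded sketch of the non-blocking variants (illustrative, not from the original text): offer returns false on a full queue and poll returns null on an empty one.

import java.util.concurrent.LinkedBlockingQueue;

public class TestLbqNonBlocking {
    public static void main(String[] args) {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(1); // capacity 1
        System.out.println(queue.offer("a")); // true  - a slot is available
        System.out.println(queue.offer("b")); // false - full, returns immediately instead of blocking
        System.out.println(queue.poll());     // "a"
        System.out.println(queue.poll());     // null  - empty, returns immediately instead of blocking
    }
}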
9.2.4. LinkedBlockingQueue vs ArrayBlockingQueue performance comparison
- LinkedBlockingQueue uses two locks (putLock/takeLock), so an enqueue and a dequeue can proceed at the same time; ArrayBlockingQueue uses a single lock for both ends, so put and take serialize against each other.
- ArrayBlockingQueue allocates its array up front, while LinkedBlockingQueue allocates a new Node for every element, which adds allocation and GC overhead.
- ArrayBlockingQueue must be given a capacity at construction; LinkedBlockingQueue may be left effectively unbounded (Integer.MAX_VALUE).
9.2.5. LinkedBlockingQueue vs ConcurrentLinkedQueue performance comparison
ConcurrentLinkedQueue is structurally very similar to LinkedBlockingQueue:
- the head and tail are updated independently, so one enqueuing thread and one dequeuing thread can make progress at the same time;
- the dummy (sentinel) node lets the two ends operate on different nodes, avoiding contention between them;
- the difference is that ConcurrentLinkedQueue is non-blocking and synchronizes purely with CAS, whereas LinkedBlockingQueue relies on the reentrant lock ReentrantLock (and can block). A small sketch follows.
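For contrast, a minimal sketch of ConcurrentLinkedQueue (illustrative, not from the original text): offer/poll never block and the queue synchronizes through CAS only.

import java.util.concurrent.ConcurrentLinkedQueue;

public class TestClq {
    public static void main(String[] args) {
        ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>(); // unbounded, lock-free
        queue.offer("a");                 // enqueue via CAS on the tail, never blocks
        queue.offer("b");
        System.out.println(queue.poll()); // "a" - dequeue via CAS on the head
        System.out.println(queue.poll()); // "b"
        System.out.println(queue.poll()); // null - empty, returns immediately
    }
}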
9.3.CopyOnWriteArrayList
CopyOnWriteArrayList is a thread-safe List implementation in the Java concurrency package (java.util.concurrent). Its core idea is copy-on-write: a mutating operation (add, remove, set, ...) first copies the underlying array, applies the change to the copy, and then switches the array reference to the new array, while reads access the current array directly without locking. This makes reads lock-free yet thread-safe and keeps the concurrency control simple.
9.3.1. Core characteristics
- Thread safety: copy-on-write removes conflicts between readers and writers, so reads are safe without any locking.
- Read/write separation: reads access the current array while writes work on a private copy, so reads and writes do not block each other.
- Eventual consistency: a read may observe an old version of the data (the write modifies a new array, and the old array stays visible until the reference is switched), but readers eventually see the latest state.
- null elements: unlike many concurrent collections (e.g. ConcurrentHashMap), CopyOnWriteArrayList does permit null elements, just as ArrayList does.
9.3.2. Underlying structure
public class CopyOnWriteArrayList<E> implements List<E>, RandomAccess, Cloneable, Serializable {
// global lock guarding all mutating operations (every write must acquire it)
final transient ReentrantLock lock = new ReentrantLock();
// backing array (volatile, so the new array reference is visible to all threads)
private transient volatile Object[] array;
// read the current array (used by read operations)
final Object[] getArray() {
return array;
}
// publish the new array once a write has completed
final void setArray(Object[] a) {
array = a;
}
}
- The array field: volatile ensures that when the reference is switched to a new array, all threads immediately see the new reference.
- The lock field: every write operation (add, remove, ...) must acquire this lock first, so only one thread mutates the list at a time and concurrent copies of the array cannot interleave.
9.3.3. Core operations
Read (get(int index))
public E get(int index) {
return get(getArray(), index); // read the current array directly, no lock
}
private E get(Object[] a, int index) {
return (E) a[index];
}
- Lock-free: a read simply fetches the current array via getArray() and indexes into it; no lock is taken, so reads are very cheap.
- May read stale data: if a write is in progress when the read executes (the new array has been copied but the array reference has not been switched yet), the read still goes to the old array, so it may return the pre-modification value (eventual consistency).
Write (add(E e))
public boolean add(E e) {
final ReentrantLock lock = this.lock;
lock.lock(); // lock so that only one thread mutates at a time
try {
Object[] elements = getArray(); // snapshot the current array
int len = elements.length;
// copy into a new array that is one slot longer
Object[] newElements = Arrays.copyOf(elements, len + 1);
newElements[len] = e; // append the element in the new array
setArray(newElements); // switch the array reference to the new array
return true;
} finally {
lock.unlock(); // release the lock
}
}
- Steps:
  1. Acquire the global lock to gain exclusive write access.
  2. Copy the current array into a new array (length + 1).
  3. Apply the insert on the new array.
  4. Switch the array reference to the new array (volatile makes the switch visible).
  5. Release the lock.
- Writes are expensive: every mutation copies the entire array, so with many elements this costs memory and CPU and puts extra pressure on the GC.
Iteration (iterator())
The iterator of CopyOnWriteArrayList is read-only (remove, add and similar mutators throw UnsupportedOperationException) and traverses a snapshot of the array taken when the iterator was created:
public Iterator<E> iterator() {
return new COWIterator<E>(getArray(), 0); // hand the current array to the iterator as its snapshot
}
private static class COWIterator<E> implements ListIterator<E> {
private final Object[] snapshot; // the array captured when the iterator was created
private int cursor;
private COWIterator(Object[] elements, int initialCursor) {
cursor = initialCursor;
snapshot = elements; // keep a reference to the array as it was at creation time
}
public E next() {
// walk the snapshot array
if (!hasNext()) throw new NoSuchElementException();
return (E) snapshot[cursor++];
}
}
- Snapshot semantics: once created, the iterator traverses the array captured at creation time; later modifications of the list (add, remove) do not affect the traversal, because they operate on new arrays. (A small sketch follows.)
- No concurrent-modification failures: unlike the fail-fast iterator of ArrayList, it never throws ConcurrentModificationException, but it may iterate over stale data.
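A minimal single-threaded sketch of the snapshot semantics (illustrative, not from the original text): elements added after the iterator is created are not seen by that iterator, and no ConcurrentModificationException is thrown.

import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class TestCowIterator {
    public static void main(String[] args) {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        list.add("b");
        Iterator<String> it = list.iterator(); // captures the current array as a snapshot
        list.add("c");                         // the write goes to a *new* array
        while (it.hasNext()) {
            System.out.println(it.next());     // prints only "a", "b" - no ConcurrentModificationException
        }
        System.out.println(list.size());       // 3 - the list itself sees the new element
    }
}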
9.3.4. Pros, cons, and typical uses
Advantages:
- Fast reads: the lock-free read path suits read-mostly workloads; reads are far cheaper than with Collections.synchronizedList, whose reads also take a lock.
- Simple thread safety: no manual synchronization is required; copy-on-write plus the write lock handle it.
- Stable iteration: iterators never throw ConcurrentModificationException, which suits long-running traversals.
Disadvantages:
- Expensive writes: every mutation copies the whole array, temporarily doubling memory use and taking time on large arrays.
- Weak consistency: reads may return stale data, so it is unsuitable where strong consistency is required.
- High memory footprint: the old and new arrays coexist during a copy; under heavy concurrent writes this can even cause an OutOfMemoryError.
Typical use cases (a small listener-registry sketch follows this list):
- Read-mostly data such as configuration caches or mostly static lists: e.g. configuration loaded at startup that is rarely modified but frequently read.
- Scenarios that tolerate weak consistency: short windows of stale reads are acceptable as long as readers eventually converge.
- Scenarios that need iteration to be safe: traversals must not fail because of concurrent modification.
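As a concrete instance of the read-mostly scenario, here is a small sketch of an event-listener registry (the ListenerRegistry class and its API are hypothetical, invented only for illustration): listeners are registered rarely but iterated on every event.

import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// hypothetical helper class, used only to illustrate the read-mostly pattern
public class ListenerRegistry<E> {
    private final CopyOnWriteArrayList<Consumer<E>> listeners = new CopyOnWriteArrayList<>();

    public void register(Consumer<E> listener) {   // rare write: copies the array once
        listeners.add(listener);
    }

    public void publish(E event) {                 // frequent read: lock-free snapshot iteration
        for (Consumer<E> listener : listeners) {
            listener.accept(event);
        }
    }
}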
