/*
* Written by Doug Lea with assistance from members of JCP JSR-166
* Expert Group and released to the public domain, as explained at
* http://creativecommons.org/publicdomain/zero/1.0/
*/
package java.util.concurrent;
import java.io.Serializable;
import java.util.AbstractCollection;
import java.util.AbstractMap;
import java.util.AbstractSet;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.NavigableSet;
import java.util.NoSuchElementException;
import java.util.Set;
import java.util.SortedMap;
import java.util.Spliterator;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
/**
 * A scalable concurrent {@link ConcurrentNavigableMap} implementation.
 * The map is sorted according to the {@linkplain Comparable natural
 * ordering} of its keys, or by a {@link Comparator} provided at map
 * creation time, depending on which constructor is used.
 *
 * <p>This class implements a concurrent variant of <a
 * href="http://en.wikipedia.org/wiki/Skip_list" target="_top">SkipLists</a>
 * providing expected average <i>log(n)</i> time cost for the
 * {@code containsKey}, {@code get}, {@code put} and
 * {@code remove} operations and their variants. Insertion, removal,
 * update, and access operations safely execute concurrently by
 * multiple threads.
 *
 * <p>Iterators and spliterators are
 * <a href="package-summary.html#Weakly"><i>weakly consistent</i></a>.
 * (In particular, an entry returned during iteration may already have
 * been removed, and the value it reports is the one read at traversal
 * time; later updates to that value are not reflected.)
 *
 * <p>Ascending key ordered views and their iterators are faster than
 * descending ones.
 *
 * <p>All {@code Map.Entry} pairs returned by methods in this class
 * and its views represent snapshots of mappings at the time they were
 * produced. They do <em>not</em> support the {@code Entry.setValue}
 * method. (Note however that it is possible to change mappings in the
 * associated map using {@code put}, {@code putIfAbsent}, or
 * {@code replace}, depending on exactly which effect you need.)
 *
 * <p>Beware that, unlike in most collections, the {@code size}
 * method is <em>not</em> a constant-time operation. Because of the
 * asynchronous nature of these maps, determining the current number
 * of elements requires a traversal of the elements, and so may report
 * inaccurate results if this collection is modified during traversal.
 * Additionally, the bulk operations {@code putAll}, {@code equals},
 * {@code toArray}, {@code containsValue}, and {@code clear} are
 * <em>not</em> guaranteed to be performed atomically. For example, an
 * iterator operating concurrently with a {@code putAll} operation
 * might view only some of the added elements.
 *
 * <p>This class and its views and iterators implement all of the
 * <em>optional</em> methods of the {@link Map} and {@link Iterator}
 * interfaces. Like most other concurrent collections, this class does
 * <em>not</em> permit the use of {@code null} keys or values because some
 * null return values cannot be reliably distinguished from the absence of
 * elements.
 *
 * <p>This class is a member of the
 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
 * Java Collections Framework</a>.
*
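 * <p>A minimal usage sketch (illustrative only; the key type, values, and
 * variable names below are arbitrary):
 * <pre> {@code
 * ConcurrentNavigableMap<String, Integer> scores =
 *     new ConcurrentSkipListMap<>();                          // natural ordering
 * ConcurrentNavigableMap<String, Integer> reversed =
 *     new ConcurrentSkipListMap<>(Comparator.reverseOrder()); // comparator ordering
 * scores.put("alice", 1);
 * scores.putIfAbsent("bob", 2);        // atomic per-key update
 * Integer v = scores.get("alice");     // safe to call concurrently with writes
 * for (Map.Entry<String, Integer> e : scores.headMap("m").entrySet())
 *     System.out.println(e);           // weakly consistent view
 * }</pre>
 *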
* @param <K> the type of keys maintained by this map
* @param <V> the type of mapped values
* @author Doug Lea
* @since 1.6
*/
/*
 * Note: there is no known efficient lock-free insertion and deletion
 * algorithm for search trees. A tree node has at least a left and a right
 * child, so, unlike in a linked list, a deletion cannot be announced by
 * CAS'ing a single next pointer of the node being removed (and thereby
 * blocking insertions at that node); the child links would have to be
 * updated together atomically, which requires locking.
 */
public class ConcurrentSkipListMap<K, V> extends AbstractMap<K, V>
implements ConcurrentNavigableMap<K, V>, Cloneable, Serializable {
/*
     * This class implements a tree-like two-dimensionally linked skip
     * list in which the index levels are represented in separate
     * nodes from the base nodes holding data. There are two reasons
     * for taking this approach instead of the usual array-based
     * structure: 1) Array based implementations seem to encounter
     * more complexity and overhead 2) We can use cheaper algorithms
     * for the heavily-traversed index lists than can be used for the
     * base lists. Here's a picture of some of the basics for a
     * possible list with 2 levels of index:
*
     * Head nodes          Index nodes
     * +-+    right        +-+                      +-+
     * |2|---------------->| |--------------------->| |->null
     * +-+                 +-+                      +-+
     *  | down              |                        |
     *  v                   v                        v
     * +-+            +-+  +-+       +-+            +-+       +-+
     * |1|----------->| |->| |------>| |----------->| |------>| |->null
     * +-+            +-+  +-+       +-+            +-+       +-+
     *  v              |    |         |              |         |
     * Nodes  next     v    v         v              v         v
     * +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+
     * | |->|A|->|B|->|C|->|D|->|E|->|F|->|G|->|H|->|I|->|J|->|K|->null
     * +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+
*
     * The base lists use a variant of the HM linked ordered set
     * algorithm. See Tim Harris, "A pragmatic implementation of
     * non-blocking linked lists"
     * http://www.cl.cam.ac.uk/~tlh20/publications.html and Maged
     * Michael "High Performance Dynamic Lock-Free Hash Tables and
     * List-Based Sets"
     * http://www.research.ibm.com/people/m/michael/pubs.htm. The
     * basic idea in these lists is to mark the "next" pointers of
     * deleted nodes when deleting to avoid conflicts with concurrent
     * insertions, and when traversing to keep track of triples
     * (predecessor, node, successor) in order to detect when and how
     * to unlink these deleted nodes.
*
     * Rather than using mark-bits to mark list deletions (which can
     * be slow and space-intensive using AtomicMarkableReference), nodes
     * use direct CAS'able next pointers. On deletion, instead of
     * marking a pointer, they splice in another node that can be
     * thought of as standing for a marked pointer (indicating this by
     * using otherwise impossible field values). Using plain nodes
     * acts roughly like "boxed" implementations of marked pointers,
     * but uses new nodes only when nodes are deleted, not for every
     * link. This requires less space and supports faster
     * traversal. Even if marked references were better supported by
     * JVMs, traversal using this technique might still be faster
     * because any search need only read ahead one more node than
     * otherwise required (to check for trailing marker) rather than
     * unmasking mark bits or whatever on each read.
*
     * This approach maintains the essential property needed in the HM
     * algorithm of changing the next-pointer of a deleted node so
     * that any other CAS of it will fail, but implements the idea by
     * changing the pointer to point to a different node, not by
     * marking it. While it would be possible to further squeeze
     * space by defining marker nodes not to have key/value fields, it
     * isn't worth the extra type-testing overhead. The deletion
     * markers are rarely encountered during traversal and are
     * normally quickly garbage collected. (Note that this technique
     * would not work well in systems without garbage collection.)
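     *
     * As an illustrative contrast only (mark bits are not used in this
     * class), the two styles look roughly like:
     *
     *     // mark-bit style: set a boolean mark on the next link
     *     AtomicMarkableReference<Node<K,V>> next;
     *     next.compareAndSet(f, f, false, true);
     *
     *     // marker-node style used here: splice in a self-valued node
     *     n.casNext(f, new Node<K,V>(f));   // see Node.appendMarker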
*
     * In addition to using deletion markers, the lists also use
     * nullness of value fields to indicate deletion, in a style
     * similar to typical lazy-deletion schemes. If a node's value is
     * null, then it is considered logically deleted and ignored even
     * though it is still reachable. This maintains proper control of
     * concurrent replace vs delete operations -- an attempted replace
     * must fail if a delete beat it by nulling field, and a delete
     * must return the last non-null value held in the field. (Note:
     * Null, rather than some special marker, is used for value fields
     * here because it just so happens to mesh with the Map API
     * requirement that method get returns null if there is no
     * mapping, which allows nodes to remain concurrently readable
     * even when deleted. Using any other marker value here would be
     * messy at best.)
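     *
     * A condensed sketch of that interplay (simplified; traversal and
     * retry logic omitted): replace and delete race on the same CAS of
     * the value field, so exactly one of them consumes the last
     * non-null value.
     *
     *     Object v = n.value;
     *     if (v == null)                  // a delete nulled it first,
     *         return null;                // so the replace attempt fails
     *     if (n.casValue(v, newValue))    // replace wins only while non-null
     *         return (V)v;
     *     // ... a delete instead does n.casValue(v, null) and, on
     *     // success, returns v as the removed value.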
*
     * Here's the sequence of events for a deletion of node n with
     * predecessor b and successor f, initially:
*
     *        +------+       +------+      +------+
     *   ...  |   b  |------>|   n  |----->|   f  | ...
     *        +------+       +------+      +------+
*
     * 1. CAS n's value field from non-null to null.
     *    From this point on, no public operations encountering
     *    the node consider this mapping to exist. However, other
     *    ongoing insertions and deletions might still modify
     *    n's next pointer.
     *
     * 2. CAS n's next pointer to point to a new marker node.
     *    From this point on, no other nodes can be appended to n,
     *    which avoids deletion errors in CAS-based linked lists.
*
     *        +------+       +------+      +------+       +------+
     *   ...  |   b  |------>|   n  |----->|marker|------>|   f  | ...
     *        +------+       +------+      +------+       +------+
*
     * 3. CAS b's next pointer over both n and its marker.
     *    From this point on, no new traversals will encounter n,
     *    and it can eventually be GCed.
     *
     *        +------+                                        +------+
     *   ...  |   b  |--------------------------------------->|   f  | ...
     *        +------+                                        +------+
*
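     * In code form these steps appear, condensed from doRemove (the
     * surrounding traversal and retries are omitted), roughly as:
     *
     *     if (!n.casValue(v, null))                    // step 1
     *         break;                                   // lost the race; retry
     *     if (!n.appendMarker(f) || !b.casNext(n, f))  // steps 2 and 3
     *         findNode(key);                           // interference: retraverse,
     *                                                  // helping out as needed
     *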
     * A failure at step 1 leads to simple retry due to a lost race
     * with another operation. Steps 2-3 can fail because some other
     * thread noticed during a traversal a node with null value and
     * helped out by marking and/or unlinking. This helping-out
     * ensures that no thread can become stuck waiting for progress of
     * the deleting thread. The use of marker nodes slightly
     * complicates helping-out code because traversals must track
     * consistent reads of up to four nodes (b, n, marker, f), not
     * just (b, n, f), although the next field of a marker is
     * immutable, and once a next field is CAS'ed to point to a
     * marker, it never again changes, so this requires less care.
*
     * Skip lists add indexing to this scheme, so that the base-level
     * traversals start close to the locations being found, inserted
     * or deleted -- usually base level traversals only traverse a few
     * nodes. This doesn't change the basic algorithm except for the
     * need to make sure base traversals start at predecessors (here,
     * b) that are not (structurally) deleted, otherwise retrying
     * after processing the deletion.
*
     * Index levels are maintained as lists with volatile next fields,
     * using CAS to link and unlink. Races are allowed in index-list
     * operations that can (rarely) fail to link in a new index node
     * or delete one. (We can't do this of course for data nodes.)
     * However, even when this happens, the index lists remain sorted,
     * so correctly serve as indices. This can impact performance,
     * but since skip lists are probabilistic anyway, the net result
     * is that under contention, the effective "p" value may be lower
     * than its nominal value. And race windows are kept small enough
     * that in practice these failures are rare, even under a lot of
     * contention.
*
     * The fact that retries (for both base and index lists) are
     * relatively cheap due to indexing allows some minor
     * simplifications of retry logic. Traversal restarts are
     * performed after most "helping-out" CASes. This isn't always
     * strictly necessary, but the implicit backoffs tend to help
     * reduce other downstream failed CAS's enough to outweigh restart
     * cost. This worsens the worst case, but seems to improve even
     * highly contended cases.
*
     * Unlike most skip-list implementations, index insertion and
     * deletion here require a separate traversal pass occurring after
     * the base-level action, to add or remove index nodes. This adds
     * to single-threaded overhead, but improves contended
     * multithreaded performance by narrowing interference windows,
     * and allows deletion to ensure that all index nodes will be made
     * unreachable upon return from a public remove operation, thus
     * avoiding unwanted garbage retention. This is more important
     * here than in some other data structures because we cannot null
     * out node fields referencing user keys since they might still be
     * read by other ongoing traversals.
*
     * Indexing uses skip list parameters that maintain good search
     * performance while using sparser-than-usual indices: The
     * hardwired parameters k=1, p=0.5 (see method doPut) mean
     * that about one-quarter of the nodes have indices. Of those that
     * do, half have one level, a quarter have two, and so on (see
     * Pugh's Skip List Cookbook, sec 3.4). The expected total space
     * requirement for a map is slightly less than for the current
     * implementation of java.util.TreeMap.
*
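     * The level-drawing step in doPut realizes these parameters roughly
     * as follows (condensed sketch; the random bits come from the
     * thread's secondary seed mentioned further below):
     *
     *     int rnd = ThreadLocalRandom.nextSecondarySeed();
     *     if ((rnd & 0x80000001) == 0) {      // highest and lowest bits zero:
     *         int level = 1;                  // ~1/4 of insertions get an index
     *         while (((rnd >>>= 1) & 1) != 0)
     *             ++level;                    // each extra level with p = 1/2
     *         // ... build Index nodes up to this level
     *     }
     *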
     * Changing the level of the index (i.e., the height of the
     * tree-like structure) also uses CAS. The head index has initial
     * level/height of one. Creation of an index with height greater
     * than the current level adds a level to the head index by
     * CAS'ing on a new top-most head. To maintain good performance
     * after a lot of removals, deletion methods heuristically try to
     * reduce the height if the topmost levels appear to be empty.
     * This may encounter races in which it is possible (but rare) to
     * reduce and "lose" a level just as it is about to contain an
     * index (that will then never be encountered). This does no
     * structural harm, and in practice appears to be a better option
     * than allowing unrestrained growth of levels.
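     *
     * Raising the head, condensed from doPut (idxs[] here stands for
     * the new Index nodes built for the key being inserted):
     *
     *     HeadIndex<K,V> h = head;
     *     if (level > h.level) {
     *         HeadIndex<K,V> newh = h;
     *         Node<K,V> oldbase = h.node;
     *         for (int j = h.level + 1; j <= level; ++j)
     *             newh = new HeadIndex<K,V>(oldbase, newh, idxs[j], j);
     *         if (casHead(h, newh))
     *             h = newh;       // else: lost the race, reread head and retry
     *     }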
*
     * The code for all this is more verbose than you'd like. Most
     * operations entail locating an element (or position to insert an
     * element). The code to do this can't be nicely factored out
     * because subsequent uses require a snapshot of predecessor
     * and/or successor and/or value fields which can't be returned
     * all at once, at least not without creating yet another object
     * to hold them -- creating such little objects is an especially
     * bad idea for basic internal search operations because it adds
     * to GC overhead. (This is one of the few times I've wished Java
     * had macros.) Instead, some traversal code is interleaved within
     * insertion and removal operations. The control logic to handle
     * all the retry conditions is sometimes twisty. Most search is
     * broken into 2 parts. findPredecessor() searches index nodes
     * only, returning a base-level predecessor of the key. findNode()
     * finishes out the base-level search. Even with this factoring,
     * there is a fair amount of near-duplication of code to handle
     * variants.
*
     * To produce random values without interference across threads,
     * we use within-JDK thread local random support (via the
     * "secondary seed", to avoid interference with user-level
     * ThreadLocalRandom.)
*
     * A previous version of this class wrapped non-comparable keys
     * with their comparators to emulate Comparables when using
     * comparators vs Comparables. However, JVMs now appear to better
     * handle infusing comparator-vs-comparable choice into search
     * loops. Static method cpr(comparator, x, y) is used for all
     * comparisons, which works well as long as the comparator
     * argument is set up outside of loops (thus sometimes passed as
     * an argument to internal methods) to avoid field re-reads.
*
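     * For example, the search loops follow this pattern (condensed
     * from doGet):
     *
     *     Comparator<? super K> cmp = comparator;  // read the field once
     *     for (Node<K,V> b = findPredecessor(key, cmp), n = b.next;;) {
     *         ...
     *         int c = cpr(cmp, key, n.key);        // no field re-read here
     *         ...
     *     }
     *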
     * For explanation of algorithms sharing at least a couple of
     * features with this one, see Mikhail Fomitchev's thesis
     * (http://www.cs.yorku.ca/~mikhail/), Keir Fraser's thesis
     * (http://www.cl.cam.ac.uk/users/kaf24/), and Hakan Sundell's
     * thesis (http://www.cs.chalmers.se/~phs/).
*
     * Given the use of tree-like index nodes, you might wonder why
     * this doesn't use some kind of search tree instead, which would
     * support somewhat faster search operations. The reason is that
     * there are no known efficient lock-free insertion and deletion
     * algorithms for search trees. The immutability of the "down"
     * links of index nodes (as opposed to mutable "left" fields in
     * true trees) makes this tractable using only CAS operations.
*
     * Notation guide for local variables
     * Node:         b, n, f    for  predecessor, node, successor
     * Index:        q, r, d    for index node, right, down.
     *               t          for another index node
     * Head:         h
     * Levels:       j
     * Keys:         k, key
     * Values:       v, value
     * Comparisons:  c
*/
private static final long serialVersionUID = -8627078645895051609L;
/**
     * Special value used to identify base-level header
*/
private static final Object BASE_HEADER = new Object();
/**
     * The topmost head index of the skiplist.
*/
private transient volatile HeadIndex<K, V> head;
/**
* The comparator used to maintain order in this map, or null if
* using natural ordering. (Non-private to simplify access in
* nested classes.)
*
* @serial
*/
final Comparator<? super K> comparator;
/**
     * Lazily initialized key set
*/
private transient KeySet<K> keySet;
/**
     * Lazily initialized entry set
*/
private transient EntrySet<K, V> entrySet;
/**
     * Lazily initialized values collection
*/
private transient Values<V> values;
/**
     * Lazily initialized descending key set
*/
private transient ConcurrentNavigableMap<K, V> descendingMap;
/**
     * Initializes or resets state. Needed by constructors, clone,
     * clear, readObject, and ConcurrentSkipListSet.clone.
     * (Note that comparator must be separately initialized.)
*/
private void initialize() {
keySet = null;
entrySet = null;
values = null;
descendingMap = null;
        // Initialize the head: the base node carries BASE_HEADER rather than a
        // real value, since a null value would mark the node as deleted
head = new HeadIndex<K, V>(new Node<K, V>(null, BASE_HEADER, null),
null, null, 1);
}
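    // After initialize(), the structure is a single level-1 head index over
    // an empty base list:
    //
    //   head -> HeadIndex(level = 1, down = null, right = null)
    //             |
    //             v  (node)
    //           Node(key = null, value = BASE_HEADER, next = null)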
/**
* compareAndSet head node
*/
private boolean casHead(HeadIndex<K, V> cmp, HeadIndex<K, V> val) {
return UNSAFE.compareAndSwapObject(this, headOffset, cmp, val);
}
/* ---------------- Nodes -------------- */
/**
     * Nodes hold keys and values, and are singly linked in sorted
     * order, possibly with some intervening marker nodes. The list is
     * headed by a dummy node accessible as head.node. The value field
     * is declared only as Object because it takes special non-V
     * values for marker and header nodes.
*/
static final class Node<K, V> {
final K key;
volatile Object value;
volatile Node<K, V> next;
/**
         * Creates a new regular node.
*/
Node(K key, Object value, Node<K, V> next) {
this.key = key;
this.value = value;
this.next = next;
}
/**
         * Creates a new marker node. A marker is distinguished by
         * having its value field point to itself. Marker nodes also
         * have null keys, a fact that is exploited in a few places,
         * but this doesn't distinguish markers from the base-level
         * header node (head.node), which also has a null key.
*/
Node(Node<K, V> next) {
this.key = null;
this.value = this;
this.next = next;
}
/**
         * compareAndSet value field
*/
boolean casValue(Object cmp, Object val) {
return UNSAFE.compareAndSwapObject(this, valueOffset, cmp, val);
}
/**
         * compareAndSet next field
*/
boolean casNext(Node<K, V> cmp, Node<K, V> val) {
return UNSAFE.compareAndSwapObject(this, nextOffset, cmp, val);
}
/**
         * Returns true if this node is a marker. This method isn't
         * actually called in any current code checking for markers
         * because callers will have already read value field and need
         * to use that read (not another done here) and so directly
         * test if value points to node.
*
* @return true if this node is a marker node
*/
boolean isMarker() {
return value == this;
}
/**
         * Returns true if this node is the header of base-level list.
*
* @return true if this node is header node
*/
boolean isBaseHeader() {
return value == BASE_HEADER;
}
/**
         * Tries to append a deletion marker to this node.
*
* @param f the assumed current successor of this node
* @return true if successful
*/
boolean appendMarker(Node<K, V> f) {
            // new Node<K, V>(f) creates the deletion marker: value = this
            // (self-linked) and next = f
return casNext(f, new Node<K, V>(f));
}
/**
         * Helps out a deletion by appending marker or unlinking from
         * predecessor. This is called during traversals when value
         * field seen to be null.
*
* @param b predecessor
* @param f successor
*/
        // b -> the node preceding this one, f -> the node following this one
        // Helping a deletion takes two steps: 1) splice a deletion marker in
        // after this node, 2) unlink this node and its marker. Each call
        // performs at most one of the two steps.
        void helpDelete(Node<K, V> b, Node<K, V> f) {
            /*
             * Rechecking links and then doing only one of the
             * help-out stages per call tends to minimize CAS
             * interference among helping threads.
             */
            // check that the triple is still consistent (f is still this node's
            // successor, and this node is still b's successor)
            if (f == next && this == b.next) {
                // no successor, or the successor is not yet a marker node
                if (f == null || f.value != f) // not already marked
                    // new Node<K, V>(f) creates a marker node (value = this,
                    // next = f); CAS this node's next field to that marker
                    casNext(f, new Node<K, V>(f));
                else
                    // CAS the predecessor's next field past this node and its marker
                    b.casNext(this, f.next);
            }
}
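        // Typical caller pattern (as in the traversal loops in this class):
        // on seeing a null value, help with one step and then re-traverse.
        //
        //     Object v = n.value;
        //     if (v == null) {            // n is logically deleted
        //         n.helpDelete(b, f);     // perform one help-out step
        //         break;                  // restart the traversal
        //     }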
/**
         * Returns value if this node contains a valid key-value pair,
         * else null.
         *
         * @return this node's value if it isn't a marker or header or
         * is deleted, else null
*/
V getValidValue() {
Object v = value;
            // v == this means this node is a deletion marker
            // v == BASE_HEADER means this is the base-level header node
            if (v == this || v == BASE_HEADER)
                return null;
            // if the node has been deleted, v is null, so null is returned as well
@SuppressWarnings("unchecked") V vv = (V) v;
return vv;
}
/**
         * Creates and returns a new SimpleImmutableEntry holding current
         * mapping if this node holds a valid value, else null.
*
* @return new entry or null
*/
AbstractMap.SimpleImmutableEntry<K, V> createSnapshot() {
Object v = value;
            // a deleted node, a deletion marker, or the header node carries
            // no valid mapping: return null
            if (v == null || v == this || v == BASE_HEADER)
                return null;
            @SuppressWarnings("unchecked") V vv = (V) v;
            // wrap key and value in a SimpleImmutableEntry and return it
return new AbstractMap.SimpleImmutableEntry<K, V>(key, vv);
}
// UNSAFE mechanics
private static final sun.misc.Unsafe UNSAFE;
private static final long valueOffset;
private static final long nextOffset;
static {
try {
UNSAFE = sun.misc.Unsafe.getUnsafe();
Class<?> k = Node.class;
valueOffset = UNSAFE.objectFieldOffset
(k.getDeclaredField("value"));
nextOffset = UNSAFE.objectFieldOffset
(k.getDeclaredField("next"));
} catch (Exception e) {
throw new Error(e);
}
}
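        // Summary of the states a Node's value field can take:
        //   value == BASE_HEADER : the base-level header node (key == null)
        //   value == this        : a deletion marker node     (key == null)
        //   value == null        : a logically deleted data node
        //   anything else        : a live data node holding a value of type V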
}
/* ---------------- Indexing -------------- */
/**
     * Index nodes represent the levels of the skip list. Note that
     * even though both Nodes and Indexes have forward-pointing
     * fields, they have different types and are handled in different
     * ways, which can't nicely be captured by placing the field in a
     * shared abstract class.
*/
static class Index<K, V> {
final Node<K, V> node;
final Index<K, V> down;
volatile Index<K, V> right;
/**
         * Creates index node with given values.
*/
Index(Node<K, V> node, Index<K, V> down, Index<K, V> right) {
this.node = node;
this.down = down;
this.right = right;
}
/**
         * compareAndSet right field
*/
final boolean casRight(Index<K, V> cmp, Index<K, V> val) {
return UNSAFE.compareAndSwapObject(this, rightOffset, cmp, val);
}
/**
         * Returns true if the node this indexes has been deleted.
*
* @return true if indexed node is known to be deleted
*/
final boolean indexesDeletedNode() {
return node.value == null;
}
/**
         * Tries to CAS newSucc as successor. To minimize races with
         * unlink that may lose this index node, if the node being
         * indexed is known to be deleted, it doesn't try to link in.
*
* @param succ the expected current successor
* @param newSucc the new successor
* @return true if successful
*/
final boolean link(Index<K, V> succ, Index<K, V> newSucc) {
Node<K, V> n = node;
            // the new successor's right link points to the old successor
            newSucc.right = succ;
            // if the indexed node is known to be deleted, don't try to link in;
            // otherwise CAS the right field to the new successor
            return n.value != null && casRight(succ, newSucc);
}
/**
         * Tries to CAS right field to skip over apparent successor
         * succ. Fails (forcing a retraversal by caller) if this node
         * is known to be deleted.
*
* @param succ the expected current successor
* @return true if successful
*/
final boolean unlink(Index<K, V> succ) {
            // node.value != null checks that the indexed node has not been deleted
            // (minimizing races: if the node is deleted right at this point,
            // changing this index's right link would not corrupt the index list);
            // then CAS the right field past succ
            return node.value != null && casRight(succ, succ.right);
}
// Unsafe mechanics
private static final sun.misc.Unsafe UNSAFE;
private static final long rightOffset;
static {
try {
UNSAFE = sun.misc.Unsafe.getUnsafe();
Class<?> k = Index.class;
rightOffset = UNSAFE.objectFieldOffset
(k.getDeclaredField("right"));
} catch (Exception e) {
throw new Error(e);
}
}
}
/* ---------------- Head nodes -------------- */
/**
     * Nodes heading each level keep track of their level.
*/
static final class HeadIndex<K, V> extends Index<K, V> {
final int level;
HeadIndex(Node<K, V> node, Index<K, V> down, Index<K, V> right, int level) {
super(node, down, right);
this.level = level;
}
}
/* ---------------- Comparison utilities -------------- */
/**
     * Compares using comparator or natural ordering if null.
     * Called only by methods that have performed required type checks.
*/
@SuppressWarnings({"unchecked", "rawtypes"})
static final int cpr(Comparator c, Object x, Object y) {
return (c != null) ? c.compare(x, y) : ((Comparable) x).compareTo(y);
}
/* ---------------- Traversal -------------- */
/**
     * Returns a base-level node with key strictly less than given key,
     * or the base-level header if there is no such node. Also
     * unlinks indexes to deleted nodes found along the way. Callers
     * rely on this side-effect of clearing indices to deleted nodes.
*
* @param key the key
* @return a predecessor of key
*/
    // If head is the only node, head.node is returned.
    // Finds the base-level node of an index node with key less than the given
    // key, removing stale index nodes (pointing to deleted nodes) on the way.
private Node<K, V> findPredecessor(Object key, Comparator<? super K> cmp) {
        if (key == null)
            throw new NullPointerException(); // don't postpone errors
        for (; ; ) {
            // q, r, d -> index node, right, down.
            for (Index<K, V> q = head, r = q.right, d; ; ) {
                if (r != null) {
                    Node<K, V> n = r.node;
                    K k = n.key;
                    // n.value == null means node n has been deleted, so this
                    // index entry is stale
                    if (n.value == null) {
                        if (!q.unlink(r))
                            // unlinking index r failed: restart the traversal
                            // (such failures are rare in practice)
                            break; // restart
                        // re-read the next index node to the right
                        r = q.right; // reread r
                        continue;
                    }
                    // compare the given key with this index node's key k
                    if (cpr(cmp, key, k) > 0) {
                        // key is greater than k: move right to the next index node
                        q = r;
                        r = r.right;
                        continue;
                    }
                }
                // ---- q.node.key (possibly the header) < key <= r.node.key ----
                // descend to the next index level
                if ((d = q.down) == null)
                    // lowest index level reached: return this index's node
                    // note: q.node is returned, not r.node (q.node.key, possibly
                    // the header, is < key)
                    return q.node;
                // step q and r down to the next index level
                q = d;
                r = d.right;
            }
}
}
/**
     * Returns node holding key or null if no such, clearing out any
     * deleted nodes seen along the way. Repeatedly traverses at
     * base-level looking for key starting at predecessor returned
     * from findPredecessor, processing base-level deletions as
     * encountered. Some callers rely on this side-effect of clearing
     * deleted nodes.
* <p>
     * Restarts occur, at traversal step centered on node n, if:
* <p>
     * (1) After reading n's next field, n is no longer assumed
     *     predecessor b's current successor, which means that
     *     we don't have a consistent 3-node snapshot and so cannot
     *     unlink any subsequent deleted nodes encountered.
* <p>
     * (2) n's value field is null, indicating n is deleted, in
     *     which case we help out an ongoing structural deletion
     *     before retrying. Even though there are cases where such
* unlinking doesn't require restart, t