A hash table (also called a hash map) is a data structure that is accessed directly by a key value. That is, it maps a key to a position in a table in order to locate a record, which speeds up lookups. The mapping function is called the hash function, and the array that holds the records is called the hash table.
Contents:
1. What is a hash table
2. How HashMap is implemented
3. Characteristics of HashMap
4. Summary
1. What is a hash table
A hash table is a data structure accessed directly through a key value. Since an array element can be located directly by its index, a hash table exploits this property and uses a function to map each key to a position in an array.
The relationship can be written as:
position = F(key), where F is the hash function
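As a rough sketch of this idea (not the JDK code), the following assumes a table of 16 buckets and uses a simple modulo-style function as F; the class name SimpleHashDemo and the sample key are made up for illustration:
public class SimpleHashDemo {
    public static void main(String[] args) {
        int buckets = 16;                                  // assumed table size for illustration
        String key = "apple";
        // F(key): take the key's hashCode and map it into [0, buckets)
        int position = (key.hashCode() & 0x7fffffff) % buckets;
        System.out.println("key=" + key + " -> bucket " + position);
    }
}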
2. How HashMap is implemented
1. Inner classes
(1) The Node class implements the Map.Entry<K,V> interface and is the basic building block of HashMap; it is essentially a single key-value mapping.
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;
    final K key;
    V value;
    Node<K,V> next;

    // Constructor: hash value, key, value, and the next node in the chain
    Node(int hash, K key, V value, Node<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }

    public final K getKey()        { return key; }
    public final V getValue()      { return value; }
    public final String toString() { return key + "=" + value; }

    public final int hashCode() {
        return Objects.hashCode(key) ^ Objects.hashCode(value);
    }

    public final V setValue(V newValue) {
        V oldValue = value;
        value = newValue;
        return oldValue;
    }

    public final boolean equals(Object o) {
        if (o == this)
            return true;
        if (o instanceof Map.Entry) {
            Map.Entry<?,?> e = (Map.Entry<?,?>)o;
            if (Objects.equals(key, e.getKey()) &&
                Objects.equals(value, e.getValue()))
                return true;
        }
        return false;
    }
}
(2) TreeNode, the red-black tree node. The red-black tree itself is a large topic and deserves a separate article.
static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
    TreeNode<K,V> parent;  // parent node in the red-black tree
    TreeNode<K,V> left;    // left child
    TreeNode<K,V> right;   // right child
    TreeNode<K,V> prev;    // previous node in the linked list (needed to unlink the next pointer on deletion)
    boolean red;

    TreeNode(int hash, K key, V val, Node<K,V> next) {
        super(hash, key, val, next);
    }

    /**
     * Returns root of tree containing this node.
     */
    final TreeNode<K,V> root() {
        for (TreeNode<K,V> r = this, p;;) {
            if ((p = r.parent) == null)
                return r;
            r = p;
        }
    }
    // ... the remaining red-black tree methods are omitted here
}
(3) The bucket array, the backbone of HashMap:
transient Node<K,V>[] table;
These are the basic data structures inside HashMap: it is implemented as an array of buckets, each holding either a linked list or a red-black tree. The details are given below.
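To make the array-plus-linked-list part concrete, here is a deliberately simplified sketch of that structure (no treeification, no resizing). TinyChainedMap, its Entry class, and the fixed 16-bucket size are all invented for illustration and are not the JDK implementation:
import java.util.Objects;

public class TinyChainedMap<K, V> {
    static class Entry<K, V> {
        final K key; V value; Entry<K, V> next;
        Entry(K key, V value, Entry<K, V> next) { this.key = key; this.value = value; this.next = next; }
    }

    @SuppressWarnings("unchecked")
    private final Entry<K, V>[] table = new Entry[16];      // the "bucket array"

    private int index(Object key) {
        return (table.length - 1) & Objects.hashCode(key);  // same power-of-two masking idea as HashMap
    }

    public V put(K key, V value) {
        int i = index(key);
        for (Entry<K, V> e = table[i]; e != null; e = e.next) {
            if (Objects.equals(e.key, key)) { V old = e.value; e.value = value; return old; }
        }
        table[i] = new Entry<>(key, value, table[i]);        // insert at the head of the chain
        return null;
    }

    public V get(Object key) {
        for (Entry<K, V> e = table[index(key)]; e != null; e = e.next) {
            if (Objects.equals(e.key, key)) return e.value;
        }
        return null;
    }
}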
2. HashMap in detail
(1) Main fields and constants
// Default initial capacity is 16; the capacity must be a power of two
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

// Maximum capacity of the table
static final int MAXIMUM_CAPACITY = 1 << 30;

// Default load factor is 0.75
static final float DEFAULT_LOAD_FACTOR = 0.75f;

// Threshold at which a bucket's linked list is converted into a red-black tree
static final int TREEIFY_THRESHOLD = 8;

// Threshold at which a red-black tree is converted back into a linked list (checked during resize)
static final int UNTREEIFY_THRESHOLD = 6;

// Minimum table capacity before treeification is allowed; my understanding is that this
// balances the cost of treeifying a bin against simply growing the array -- when the table
// is shorter than this value, a resize is performed instead
static final int MIN_TREEIFY_CAPACITY = 64;
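The following small sketch shows how the defaults interact: the resize threshold is capacity times load factor, so with the default values the 13th entry triggers a resize. The class name ThresholdDemo is invented; the arithmetic follows directly from the constants above:
public class ThresholdDemo {
    public static void main(String[] args) {
        int capacity = 16;          // DEFAULT_INITIAL_CAPACITY
        float loadFactor = 0.75f;   // DEFAULT_LOAD_FACTOR
        int threshold = (int) (capacity * loadFactor);   // 12
        // With 16 buckets and a 0.75 load factor, the 13th entry triggers a resize to 32
        System.out.println("resize after " + threshold + " entries; next capacity = " + (capacity << 1));
    }
}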
(2) Constructors
I:
public HashMap(int initialCapacity, float loadFactor) {
    // Validate the initial capacity
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal initial capacity: " +
                                           initialCapacity);
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    // Validate the load factor
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal load factor: " +
                                           loadFactor);
    this.loadFactor = loadFactor;
    this.threshold = tableSizeFor(initialCapacity);
}
II:
public HashMap(Map<? extends K, ? extends V> m) {
    this.loadFactor = DEFAULT_LOAD_FACTOR;
    putMapEntries(m, false);  // copy the entries of m into this HashMap
}
The tableSizeFor method:
static final int tableSizeFor(int cap) {
    int n = cap - 1;
    n |= n >>> 1;
    n |= n >>> 2;
    n |= n >>> 4;
    n |= n >>> 8;
    n |= n >>> 16;
    return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
// Returns the smallest power of two greater than or equal to cap
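A quick way to see what the bit-smearing does is to run it on a few inputs. The snippet below copies the logic above into a stand-alone class (TableSizeForDemo is an invented name, and the sample inputs are arbitrary):
public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1; n |= n >>> 2; n |= n >>> 4; n |= n >>> 8; n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        // 13 -> 16, 16 -> 16, 17 -> 32: always the smallest power of two >= cap
        for (int cap : new int[]{1, 13, 16, 17, 100}) {
            System.out.println(cap + " -> " + tableSizeFor(cap));
        }
    }
}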
The putMapEntries method:
final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
    int s = m.size();
    if (s > 0) {
        // If the table has not been initialized yet, pre-size it
        if (table == null) { // pre-size
            // Divide the map's actual size by the load factor and add 1,
            // which avoids an immediate extra resize
            float ft = ((float)s / loadFactor) + 1.0F;
            int t = ((ft < (float)MAXIMUM_CAPACITY) ?
                     (int)ft : MAXIMUM_CAPACITY);
            if (t > threshold)
                threshold = tableSizeFor(t);
        }
        // If already initialized and the incoming size exceeds the threshold, resize
        else if (s > threshold)
            resize();
        for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
            K key = e.getKey();
            V value = e.getValue();
            putVal(hash(key), key, value, false, evict);
        }
    }
}
HashMap has four constructors in total; they differ only in the initial capacity and load factor they use.
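For reference, here is how the four constructors are typically used; the variable names and the capacity/load-factor values are arbitrary examples:
import java.util.HashMap;
import java.util.Map;

public class CtorDemo {
    public static void main(String[] args) {
        Map<String, Integer> a = new HashMap<>();          // defaults: capacity 16, load factor 0.75
        Map<String, Integer> b = new HashMap<>(32);        // initial capacity only
        Map<String, Integer> c = new HashMap<>(32, 0.5f);  // capacity and load factor
        a.put("x", 1);
        Map<String, Integer> d = new HashMap<>(a);         // copy an existing map via putMapEntries
        System.out.println(d.get("x"));                    // 1
    }
}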
(3) Key methods: put
public V put(K key, V value) {
    // Hash the key, then delegate to putVal
    return putVal(hash(key), key, value, false, true);
}
The putVal method is a bit involved; the annotated source below walks through it step by step.
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    // If the table is empty, call resize() to allocate it
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    // Compute the insertion index from the hash and the table length;
    // if that bucket is empty, insert the new node directly
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    else {
        Node<K,V> e; K k;
        // The bucket is occupied: if the head node has the same key, it will be overwritten
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        // The bucket is occupied by a different key and is already a tree: use putTreeVal
        else if (p instanceof TreeNode)
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        else {
            // Otherwise walk the bucket's linked list
            for (int binCount = 0; ; ++binCount) {
                // Reached the tail without finding the key: append a new node,
                // and treeify the bin if the chain has grown long enough
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        treeifyBin(tab, hash);
                    break;
                }
                // Found the same key in the chain: it will be overwritten below
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        // An existing mapping was found: overwrite it and return the old value
        if (e != null) { // existing mapping for key
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
    }
    // Part of the fail-fast mechanism
    ++modCount;
    // Resize if the new size exceeds the threshold
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
}
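The return values traced through putVal can be observed directly: put returns null for a brand-new key and the previous value when an existing key is overwritten. The class name PutReturnDemo and the sample entries are arbitrary:
import java.util.HashMap;

public class PutReturnDemo {
    public static void main(String[] args) {
        HashMap<String, String> map = new HashMap<>();
        System.out.println(map.put("k", "v1"));   // null  (new key, putVal falls through to the tail)
        System.out.println(map.put("k", "v2"));   // "v1"  (existing mapping, old value returned)
        System.out.println(map.get("k"));         // "v2"
    }
}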
Tip: while iterating over a HashMap, if the iterator finds that the map's modCount no longer matches the expectedModCount it recorded, it throws a ConcurrentModificationException. This fail-fast behaviour is one sign that HashMap is not thread-safe.
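A minimal sketch that triggers this check even in a single thread (the class name FailFastDemo and the sample entries are arbitrary): structurally modifying the map while iterating makes modCount diverge from the iterator's expectedModCount, so the next call to the iterator throws.
import java.util.HashMap;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();
        map.put(1, "a");
        map.put(2, "b");
        for (Integer key : map.keySet()) {
            map.remove(key);   // structural modification during iteration -> ConcurrentModificationException
        }
    }
}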
(4) Key methods: treeifyBin
final void treeifyBin(Node<K,V>[] tab, int hash) {
    int n, index; Node<K,V> e;
    // If the table is uninitialized or shorter than MIN_TREEIFY_CAPACITY, resize instead of treeifying
    if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
        resize();
    else if ((e = tab[index = (n - 1) & hash]) != null) {
        // Replace each Node in the bucket with a TreeNode, linking them into a doubly linked list
        TreeNode<K,V> hd = null, tl = null;
        do {
            TreeNode<K,V> p = replacementTreeNode(e, null);
            if (tl == null)
                hd = p;
            else {
                p.prev = tl;
                tl.next = p;
            }
            tl = p;
        } while ((e = e.next) != null);
        // Then build the red-black tree from the head of that list
        if ((tab[index] = hd) != null)
            hd.treeify(tab);
    }
}
The treeify method (defined in TreeNode):
final void treeify(Node<K,V>[] tab) {
    TreeNode<K,V> root = null;
    // Walk the doubly linked list built by treeifyBin and insert each node into the tree
    for (TreeNode<K,V> x = this, next; x != null; x = next) {
        next = (TreeNode<K,V>)x.next;
        x.left = x.right = null;
        if (root == null) {
            // The first node becomes the (black) root
            x.parent = null;
            x.red = false;
            root = x;
        }
        else {
            K k = x.key;
            int h = x.hash;
            Class<?> kc = null;
            // Find the insertion point: compare hashes first, then Comparable order,
            // and finally fall back to tieBreakOrder
            for (TreeNode<K,V> p = root;;) {
                int dir, ph;
                K pk = p.key;
                if ((ph = p.hash) > h)
                    dir = -1;
                else if (ph < h)
                    dir = 1;
                else if ((kc == null &&
                          (kc = comparableClassFor(k)) == null) ||
                         (dir = compareComparables(kc, k, pk)) == 0)
                    dir = tieBreakOrder(k, pk);

                TreeNode<K,V> xp = p;
                if ((p = (dir <= 0) ? p.left : p.right) == null) {
                    x.parent = xp;
                    if (dir <= 0)
                        xp.left = x;
                    else
                        xp.right = x;
                    // Rebalance after insertion to restore the red-black properties
                    root = balanceInsertion(root, x);
                    break;
                }
            }
        }
    }
    // Make sure the root of the tree is stored as the first node of the bucket
    moveRootToFront(tab, root);
}
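To see the treeification path exercised, one way is a key type whose hashCode always collides, so every entry lands in the same bucket; once enough entries accumulate (and the table has grown past MIN_TREEIFY_CAPACITY) that bucket's chain is converted by treeifyBin/treeify. TreeifyDemo and CollidingKey are invented names for this sketch, not JDK classes:
import java.util.HashMap;
import java.util.Map;

public class TreeifyDemo {
    static final class CollidingKey implements Comparable<CollidingKey> {
        final int id;
        CollidingKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; }   // force every key into the same bucket
        @Override public boolean equals(Object o) {
            return o instanceof CollidingKey && ((CollidingKey) o).id == id;
        }
        @Override public int compareTo(CollidingKey other) { return Integer.compare(id, other.id); }
    }

    public static void main(String[] args) {
        Map<CollidingKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 100; i++) {
            map.put(new CollidingKey(i), i);              // all 100 entries share one bucket
        }
        // Lookups inside that bucket are now handled by the red-black tree in O(log n)
        System.out.println(map.get(new CollidingKey(57)));
    }
}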
Summary:
Q: Why are TREEIFY_THRESHOLD and UNTREEIFY_THRESHOLD set to 8 and 6?
A: TREEIFY_THRESHOLD is 8 because, modelling bucket sizes with the Poisson probability mass function, the probability of a single bucket accumulating as many as 9 elements is less than one in ten million, so treeification is only a rare, worst-case fallback. The untreeify threshold is 6 rather than 8 because a red-black tree occupies more memory than a linked list and does not necessarily perform better when a bucket holds only a few elements; keeping a gap between the two thresholds also avoids converting back and forth when a bucket's size hovers around the boundary.
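The Poisson estimate referred to above can be reproduced with a short calculation; the JDK source comment models bucket sizes as Poisson-distributed with parameter lambda of about 0.5 under the default load factor, and the class name PoissonDemo is invented for this sketch:
public class PoissonDemo {
    public static void main(String[] args) {
        double lambda = 0.5;
        double p = Math.exp(-lambda);        // P(X = 0)
        for (int k = 0; k <= 8; k++) {
            System.out.printf("P(bucket holds %d entries) = %.8f%n", k, p);
            p = p * lambda / (k + 1);        // P(X = k+1) = P(X = k) * lambda / (k + 1)
        }
        // P(X = 8) comes out around 0.00000006, i.e. well under one in ten million
    }
}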