List:
public interface List<E> extends Collection<E>
Ordered collection; elements can be accessed and traversed by index. The most commonly used implementations are ArrayList and LinkedList.
ArrayList:
public class ArrayList<E> extends AbstractList<E>
implements List<E>, RandomAccess, Cloneable, java.io.Serializable
A dynamic array whose capacity grows automatically as elements are added.
The underlying data structure is an array (a linear table), so indexed access is very fast, while insertion and deletion in the middle are slow because elements must be shifted. A growth sketch follows the field list below.
Serial version UID
private static final long serialVersionUID = 8683452581122892189L;
Default initial capacity of the container
private static final int DEFAULT_CAPACITY = 10;
A shared empty array instance
private static final Object[] EMPTY_ELEMENTDATA = {};
A shared empty array instance; when an ArrayList is created with the default (no-argument) constructor, elementData is set to this value
private static final Object[] DEFAULTCAPACITY_EMPTY_ELEMENTDATA = {};
The array buffer in which the ArrayList stores its elements; add, remove, and the other operations all work against this field
transient Object[] elementData;
The number of elements currently in the list
private int size;
The maximum length of the array
private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
Inherited from AbstractList; records how many times the list has been structurally modified
protected transient int modCount = 0;
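To illustrate the dynamic growth, here is a minimal sketch of how ArrayList-style expansion works. The 1.5x growth factor matches JDK 8's grow(), but the demo class and its names are my own illustration, not the JDK source.

import java.util.Arrays;

// Sketch: how an ArrayList-like container grows its backing array.
class GrowDemo {
    private Object[] elementData = new Object[10];   // DEFAULT_CAPACITY
    private int size;

    void add(Object e) {
        if (size == elementData.length) {
            // JDK 8 grows by roughly 1.5x: newCapacity = old + (old >> 1)
            int newCapacity = elementData.length + (elementData.length >> 1);
            elementData = Arrays.copyOf(elementData, newCapacity);
        }
        elementData[size++] = e;
    }
}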
LinkedList:
public class LinkedList<E>
extends AbstractSequentialList<E>
implements List<E>, Deque<E>, Cloneable, java.io.Serializable
A doubly linked list.
The underlying data structure is a linked list: indexed lookup is slow because the list must be traversed, while insertion and deletion are fast since only neighbouring node links change. A short usage sketch follows.
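A minimal sketch contrasting the two implementations; the class name and sample values are illustrative only.

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListDemo {
    public static void main(String[] args) {
        List<String> array = new ArrayList<>();   // fast get(i), slow middle insert
        List<String> linked = new LinkedList<>(); // slow get(i), fast insert/remove at known nodes

        array.add("a");
        array.add("b");
        System.out.println(array.get(1));         // O(1) indexed access -> "b"

        linked.add("a");
        linked.add(0, "head");                    // cheap once the position is reached
        System.out.println(linked.get(1));        // O(n) traversal -> "a"
    }
}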
Set:
public interface Set<E> extends Collection<E>
Elements in a Set are unordered and may not repeat. The most commonly used implementations are TreeSet and HashSet.
HashSet
public class HashSet<E>
    extends AbstractSet<E>
    implements Set<E>, Cloneable, java.io.Serializable
{
    static final long serialVersionUID = -5024744406713321676L;

    private transient HashMap<E,Object> map;

    // Dummy value to associate with an Object in the backing Map
    private static final Object PRESENT = new Object();

    /**
     * Constructs a new, empty set; the backing <tt>HashMap</tt> instance has
     * default initial capacity (16) and load factor (0.75).
     */
    public HashSet() {
        map = new HashMap<>();
    }
As the constructor shows, HashSet is backed by a HashMap, but the two differ significantly:
HashSet implements the Set interface; it does not allow duplicate values; it stores the objects themselves; elements are added with add.
HashMap implements the Map interface; it does not allow duplicate keys; it stores key-value pairs; entries are added with put.
A small sketch of how add delegates to the backing map follows.
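A minimal sketch of the delegation, assuming the PRESENT dummy value shown above; the demo class and sample strings are illustrative.

import java.util.HashSet;
import java.util.Set;

public class HashSetDemo {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>();
        // Internally each add(e) is roughly map.put(e, PRESENT) == null,
        // so the element becomes a key of the backing HashMap.
        System.out.println(set.add("java"));  // true  - new element
        System.out.println(set.add("java"));  // false - duplicate, the key already exists
        System.out.println(set.size());       // 1
    }
}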
TreeSet
public class TreeSet<E> extends AbstractSet<E>
implements NavigableSet<E>, Cloneable, java.io.Serializable
{
public interface NavigableSet<E> extends SortedSet<E> {
Because NavigableSet extends SortedSet, a TreeSet keeps its elements in sorted order. A short example follows.
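A minimal sketch of the sorted behaviour; the demo class, values, and the optional Comparator are illustrative.

import java.util.Comparator;
import java.util.TreeSet;

public class TreeSetDemo {
    public static void main(String[] args) {
        TreeSet<Integer> natural = new TreeSet<>();
        natural.add(3);
        natural.add(1);
        natural.add(2);
        System.out.println(natural);            // [1, 2, 3] - natural ordering

        TreeSet<Integer> reversed = new TreeSet<>(Comparator.reverseOrder());
        reversed.add(3);
        reversed.add(1);
        reversed.add(2);
        System.out.println(reversed);           // [3, 2, 1] - Comparator ordering
        System.out.println(natural.first());    // 1 - SortedSet/NavigableSet API
    }
}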
Map
public interface Map<K,V> {
Stores key-value pairs. Commonly used implementations are HashMap, Hashtable, and TreeMap (TreeMap implements the SortedMap interface).
HashMap
Backed by an array of buckets plus linked lists / red-black trees (JDK 8).
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
The default initial capacity is 16.
No table is allocated at construction time; space is allocated only when data is first stored.
static final float DEFAULT_LOAD_FACTOR = 0.75f;
The default load factor.
When the map's size exceeds capacity * load factor (i.e. the table is 75% full), HashMap resizes; a quick threshold calculation follows.
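A minimal sketch of when a default-sized map resizes; the demo class is illustrative.

public class ThresholdDemo {
    public static void main(String[] args) {
        int capacity = 16;          // DEFAULT_INITIAL_CAPACITY
        float loadFactor = 0.75f;   // DEFAULT_LOAD_FACTOR
        int threshold = (int) (capacity * loadFactor);
        // resize() runs when ++size > threshold, i.e. on the 13th insertion
        System.out.println(threshold);  // 12
    }
}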
As noted above, the table is not allocated at construction; it is allocated lazily on the first put:
public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    else {
        Node<K,V> e; K k;
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        else if (p instanceof TreeNode)
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        else {
            for (int binCount = 0; ; ++binCount) {
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        treeifyBin(tab, hash);
                    break;
                }
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        if (e != null) { // existing mapping for key
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
    }
    ++modCount;
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
}
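In putVal the bucket index is computed as (n - 1) & hash, where JDK 8's hash() spreads the high bits of hashCode as (h = key.hashCode()) ^ (h >>> 16). A minimal sketch of that computation; the demo class name is illustrative. putVal calls resize() (shown next) when the table is empty or size exceeds the threshold.

public class IndexDemo {
    // Same spreading step JDK 8 uses in HashMap.hash(Object key)
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16;                       // table length (always a power of two)
        int h = hash("java");
        int index = (n - 1) & h;          // equivalent to h % n when n is a power of two
        System.out.println(index);        // bucket this key lands in
    }
}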
final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    if (oldCap > 0) {
        if (oldCap >= MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1; // double threshold
    }
    else if (oldThr > 0) // initial capacity was placed in threshold
        newCap = oldThr;
    else {               // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    @SuppressWarnings({"rawtypes","unchecked"})
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;
    if (oldTab != null) {
        for (int j = 0; j < oldCap; ++j) {
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null;
                if (e.next == null)
                    newTab[e.hash & (newCap - 1)] = e;
                else if (e instanceof TreeNode)
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                else { // preserve order
                    Node<K,V> loHead = null, loTail = null;
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;
                    do {
                        next = e.next;
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}
Resizing doubles the capacity and recomputes each element's position in the new table; in the linked-list case an entry either keeps its old index or moves to oldIndex + oldCap, decided by (e.hash & oldCap). A small sketch of that split follows.
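A minimal sketch of the split decision, assuming an old capacity of 16; the demo class and hash values are illustrative.

public class SplitDemo {
    public static void main(String[] args) {
        int oldCap = 16;                      // old table length, a power of two
        int[] hashes = {5, 21, 37, 53};       // sample hashes that share old bucket index 5
        for (int h : hashes) {
            // (h & oldCap) == 0 -> entry keeps index j; otherwise it moves to j + oldCap
            int newIndex = ((h & oldCap) == 0) ? (h & (oldCap - 1))
                                               : (h & (oldCap - 1)) + oldCap;
            System.out.println("hash " + h + " -> new index " + newIndex);
        }
    }
}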
Four general strategies for resolving hash collisions: 1. open addressing, 2. separate chaining, 3. rehashing, 4. a common overflow area. HashMap uses separate chaining (with tree bins since JDK 8).
Hashtable
Its storage layout and collision resolution (an array of buckets with chained entries) are essentially the same as HashMap's, although it never converts chains to red-black trees.
public Hashtable() {
    this(11, 0.75f);
}
The default initial capacity is 11; the load factor is 0.75.
protected void rehash() {
    int oldCapacity = table.length;
    Entry<?,?>[] oldMap = table;

    // overflow-conscious code
    int newCapacity = (oldCapacity << 1) + 1;
    if (newCapacity - MAX_ARRAY_SIZE > 0) {
        if (oldCapacity == MAX_ARRAY_SIZE)
            // Keep running with MAX_ARRAY_SIZE buckets
            return;
        newCapacity = MAX_ARRAY_SIZE;
    }
    Entry<?,?>[] newMap = new Entry<?,?>[newCapacity];

    modCount++;
    threshold = (int)Math.min(newCapacity * loadFactor, MAX_ARRAY_SIZE + 1);
    table = newMap;
    // ... (the remainder of rehash() relinks every old entry into newMap)
Hashtable resizing: the new capacity is (old capacity << 1) + 1, i.e. 2n + 1.
Differences between Hashtable and HashMap:
Hashtable's public methods are declared synchronized, so it is thread-safe; HashMap is not thread-safe.
Hashtable's default initial capacity is 11; HashMap's is 16.
Hashtable allows neither null keys nor null values; HashMap allows both (keys must be unique, so there can be at most one null key, while any number of values may be null).
When resizing, Hashtable grows to 2 * old capacity + 1; HashMap grows to 2 * old capacity.
A short sketch of the null-handling difference follows.
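A minimal sketch of the null-key difference; the demo class is illustrative.

import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "ok");                  // allowed: the null key hashes to bucket 0
        System.out.println(hashMap.get(null));    // ok

        Map<String, String> hashtable = new Hashtable<>();
        try {
            hashtable.put(null, "boom");          // Hashtable rejects null keys (and null values)
        } catch (NullPointerException e) {
            System.out.println("Hashtable does not allow null keys");
        }
    }
}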