Design and implement a data structure for a Least Recently Used (LRU) cache. It should support the following operations: get and set.

get(key) - Get the value (will always be positive) of the key if the key exists in the cache; otherwise return -1.
set(key, value) - Set or insert the value if the key is not already present. When the cache reaches its capacity, it should invalidate the least recently used item before inserting the new item.
After every get, and after every set, the current key must be moved to the front. My first attempt used a linked list: TreeMap + LinkedList. On a get, or a set of an existing key, delete the key from the list and re-insert it at the head; on a set of a new key, insert it directly at the head. The linear scan through the list to find the key dominates the running time, and this version got TLE.
import java.util.LinkedList;
import java.util.TreeMap;

public class LRUCache
{
    private TreeMap<Integer, Integer> treemap; // key -> value
    private LinkedList<Integer> linkedlist;    // keys, most recently used at the head
    private int capacity;

    public LRUCache(int capacity)
    {
        this.capacity = capacity;
        linkedlist = new LinkedList<>();
        treemap = new TreeMap<>();
    }

    public int get(int key)
    {
        if (!treemap.containsKey(key))
            return -1;
        int retval = treemap.get(key);
        // move the key to the head: removeFirstOccurrence is an O(n) scan
        linkedlist.removeFirstOccurrence(key);
        linkedlist.addFirst(key);
        return retval;
    }

    public void set(int key, int value)
    {
        if (!treemap.containsKey(key))
        {
            treemap.put(key, value);
            linkedlist.addFirst(key);
            if (linkedlist.size() > capacity)
            {
                // evict the least recently used key from the tail
                int removekey = linkedlist.removeLast();
                treemap.remove(removekey);
            }
        }
        else
        {
            treemap.put(key, value);
            linkedlist.removeFirstOccurrence(key);
            linkedlist.addFirst(key);
        }
    }
}
-------------------------------------------------------------------------------------------------------
Scanning the linked list for the key wastes a lot of time. Instead, use an indexed min-priority queue with a running counter: on every operation (both get and set), associate the current counter value with the key in the IndexMinPQ, then increment the counter. Whenever the size exceeds the capacity threshold, a single delMin() evicts the least recently used key.
Note: because this IndexMinPQ implementation builds the reverse index (value -> key) internally as a plain array, its maximum capacity must exceed the largest key value. The typical use case has keys that are densely packed, or that are themselves "counting" indices; if the keys can be large and sparse, the internal reverse-index array should be replaced with a HashMap.
import java.util.TreeMap;

public class LRUCache
{
    private TreeMap<Integer, Integer> treemap; // key -> value
    IndexMinPQ2<Integer> indexminpq;           // key -> last-used counter; min = LRU
    private int capacity;
    int cnt = 0;                               // running operation counter

    public LRUCache(int capacity)
    {
        this.capacity = capacity;
        // the reverse index is an array, so the PQ must be sized past the largest key
        indexminpq = new IndexMinPQ2<>(capacity + 50002);
        treemap = new TreeMap<>();
    }

    public int get(int key)
    {
        if (!treemap.containsKey(key))
            return -1;
        int retval = treemap.get(key);
        indexminpq.change(key, cnt);
        cnt++;
        return retval;
    }

    public void set(int key, int value)
    {
        if (!treemap.containsKey(key))
        {
            treemap.put(key, value);
            indexminpq.insert(key, cnt);
            if (indexminpq.size() > capacity)
            {
                int removekey = indexminpq.delMin(); // evict the LRU key
                treemap.remove(removekey);
            }
        }
        else
        {
            treemap.put(key, value);
            indexminpq.change(key, cnt);
        }
        cnt++;
    }
}
class IndexMinPQ2<Key extends Comparable<Key>>
{
    private int N;       // number of elements on the PQ
    private int[] pq;    // binary heap: heap position -> key index
    private int[] qp;    // reverse index: key index -> heap position (qp[pq[i]] = i)
    private Key[] keys;  // keys[k] is the priority associated with key index k

    @SuppressWarnings("unchecked")
    public IndexMinPQ2(int maxN)
    {
        keys = (Key[]) new Comparable[maxN + 1];
        pq = new int[maxN + 1];
        qp = new int[maxN + 1];
        for (int i = 0; i <= maxN; i++)
            qp[i] = -1;
    }

    public boolean isEmpty()
    {
        return N == 0;
    }

    public boolean contains(int k)
    {
        return qp[k] != -1;
    }

    public void insert(int k, Key key)
    {
        N++;
        pq[N] = k;
        qp[k] = N;
        keys[k] = key;
        swim(N);
    }

    public Key min()
    {
        return keys[pq[1]];
    }

    private void swim(int k)
    {
        while (k > 1 && keys[pq[k / 2]].compareTo(keys[pq[k]]) > 0)
        {
            exch(k, k / 2);
            k = k / 2;
        }
    }

    private void sink(int k)
    {
        while (k * 2 <= N)
        {
            int j = k * 2;
            if (j + 1 <= N && keys[pq[j]].compareTo(keys[pq[j + 1]]) > 0)
                j++;
            if (keys[pq[k]].compareTo(keys[pq[j]]) < 0)
                break;
            exch(k, j);
            k = j;
        }
    }

    private void exch(int i, int j)
    {
        int temp = pq[i];
        pq[i] = pq[j];
        pq[j] = temp;
        qp[pq[i]] = i;
        qp[pq[j]] = j;
    }

    public int delMin()
    {
        int indexOfMin = pq[1];
        exch(1, N--);
        sink(1);
        keys[pq[N + 1]] = null; // avoid loitering
        qp[pq[N + 1]] = -1;
        return indexOfMin;
    }

    public void change(int k, Key key)
    {
        keys[k] = key;
        swim(qp[k]);
        sink(qp[k]);
    }

    public void delete(int k)
    {
        int index = qp[k];
        exch(index, N--);
        sink(index);
        swim(index);
        keys[k] = null;
        qp[k] = -1;
    }

    public int size()
    {
        return N;
    }
}
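As the note above suggests, when keys can be large and sparse the array-backed reverse index wastes space and forces the oversizing trick in the constructor. A minimal sketch of the same structure with the reverse index and priorities stored in HashMaps instead, so only the heap array needs pre-sizing by queue capacity (the class and variable names here are my own, not from the original code):

```java
import java.util.HashMap;

// Sketch: an indexed min-PQ whose reverse index (key -> heap position) and
// priorities live in HashMaps, so key values need not be small and dense.
// The heap logic itself is unchanged from the array version.
class SparseIndexMinPQ<Key extends Comparable<Key>> {
    private int N;                    // number of elements on the PQ
    private final int[] pq;           // heap position -> key index
    private final HashMap<Integer, Integer> qp = new HashMap<>(); // key -> heap position
    private final HashMap<Integer, Key> keys = new HashMap<>();   // key -> priority

    SparseIndexMinPQ(int maxN) {
        pq = new int[maxN + 1];       // sized by queue capacity, not by max key
    }

    boolean contains(int k) { return qp.containsKey(k); }

    void insert(int k, Key key) {
        N++;
        pq[N] = k;
        qp.put(k, N);
        keys.put(k, key);
        swim(N);
    }

    int delMin() {
        int min = pq[1];
        exch(1, N--);
        sink(1);
        keys.remove(min);             // avoid loitering
        qp.remove(min);
        return min;
    }

    void change(int k, Key key) {
        keys.put(k, key);
        swim(qp.get(k));
        sink(qp.get(k));
    }

    private void swim(int k) {
        while (k > 1 && keys.get(pq[k / 2]).compareTo(keys.get(pq[k])) > 0) {
            exch(k, k / 2);
            k = k / 2;
        }
    }

    private void sink(int k) {
        while (2 * k <= N) {
            int j = 2 * k;
            if (j < N && keys.get(pq[j]).compareTo(keys.get(pq[j + 1])) > 0)
                j++;
            if (keys.get(pq[k]).compareTo(keys.get(pq[j])) <= 0)
                break;
            exch(k, j);
            k = j;
        }
    }

    private void exch(int i, int j) {
        int t = pq[i];
        pq[i] = pq[j];
        pq[j] = t;
        qp.put(pq[i], i);
        qp.put(pq[j], j);
    }
}
```

The trade-off is constant-factor overhead from boxing and hashing on every heap compare, in exchange for memory proportional to the number of live keys rather than the key range.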
update 2016.07.23
Based on the discuss threads, this can be implemented with a doubly linked list + HashMap: deleting or inserting a node in the doubly linked list is O(1), and a HashMap lookup is also O(1). The library LinkedList only supports whole-list operations; touching a single element requires a traversal, which is very expensive. By implementing the doubly linked list by hand and mapping each key to its list node, the HashMap handles lookup while the list handles insertion and deletion, combining the strengths of both for the best time complexity.
import java.util.HashMap;

public class LRUCache
{
    dNode head, tail;                // sentinels: head.next is the MRU, tail.prev the LRU
    HashMap<Integer, dNode> hashmap; // key -> list node
    int capacity;

    public LRUCache(int capacity)
    {
        hashmap = new HashMap<>(capacity);
        this.capacity = capacity;
        head = new dNode(null, -1);
        tail = new dNode(null, -1);
        head.next = tail;
        tail.prev = head; // without this back-link, eviction via tail.prev fails
    }

    public int get(int key)
    {
        dNode dn = hashmap.get(key);
        if (dn == null)
            return -1;
        // move the node to the front in O(1)
        dNode.delete(dn);
        dNode.addfirst(dn, head);
        return dn.val;
    }

    public void set(int key, int value)
    {
        dNode dnode;
        if (hashmap.containsKey(key))
        {
            dnode = hashmap.get(key);
            dnode.val = value;
            dNode.delete(dnode);
        }
        else
        {
            dnode = new dNode(key, value);
        }
        dNode.addfirst(dnode, head);
        hashmap.put(key, dnode);
        if (hashmap.size() > capacity)
            hashmap.remove(dNode.delete(tail.prev).key); // evict the LRU node
    }
}
class dNode
{
    int val;
    Integer key;
    dNode prev, next;

    public dNode(Integer key, int val)
    {
        this.val = val;
        this.key = key;
    }

    // unlink dn from the list and return it
    public static dNode delete(dNode dn)
    {
        dNode pre = dn.prev;
        dNode next = dn.next;
        pre.next = next;
        next.prev = pre;
        dn.prev = null;
        dn.next = null;
        return dn;
    }

    // insert node right after the head sentinel
    public static void addfirst(dNode node, dNode head)
    {
        dNode next = head.next;
        node.prev = head;
        node.next = next;
        next.prev = node;
        head.next = node;
    }

    @Override
    public String toString()
    {
        return String.valueOf(val);
    }
}
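For reference, and as a sanity check on the hand-rolled versions above (this is not from the original write-up): java.util.LinkedHashMap constructed with accessOrder=true maintains exactly this least-recently-used ordering, and overriding removeEldestEntry turns it into an LRU cache in a few lines:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LinkedHashMap with accessOrder=true reorders entries on every get()/put(),
// keeping the least recently used entry first in iteration order.
class LruMap extends LinkedHashMap<Integer, Integer> {
    private final int capacity;

    LruMap(int capacity) {
        super(capacity, 0.75f, true); // third argument: access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        return size() > capacity; // evict the LRU entry whenever we exceed capacity
    }
}
```

With capacity 2: put(1,1), put(2,2), get(1), then put(3,3) evicts key 2, matching the behavior of the implementations above.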