The LRU Cache Eviction Algorithm

How do we implement the LRU (Least Recently Used) cache eviction algorithm?

My idea is as follows: we maintain an ordered singly linked list, where nodes closer to the tail were accessed longer ago. When a new piece of data is accessed, we traverse the list sequentially from the head.

  1. If the data is already cached in the list, we traverse until we find its node, delete it from its current position, and insert it at the head of the list.

  2. If the data is not in the cache list, there are two further cases:

    1). If the cache is not yet full, insert the new node directly at the head of the list;
    2). If the cache is full, delete the tail node and insert the new data node at the head of the list.

This gives us an LRU cache implemented with a linked list. Simple, isn't it?

Now let's look at the time complexity of a cache access. Whether or not the cache is full, we have to traverse the list once, so this linked-list-based approach makes a cache access O(n).

We can optimize this further, for example by introducing a hash table to record each item's position, which brings cache access down to O(1).
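As a sketch of the O(1) idea, Java's built-in `LinkedHashMap` can do both jobs at once when constructed in access-order mode: the map gives O(1) lookup, and its internal doubly linked list keeps entries ordered by recency. The class and field names below are illustrative, not from the original article.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal O(1) LRU sketch using LinkedHashMap in access-order mode.
// With accessOrder = true, every get()/put() moves the entry to the tail,
// and removeEldestEntry() evicts the head (least recently used entry)
// once the capacity is exceeded.
public class LinkedHashMapLRU<K, V> extends LinkedHashMap<K, V> {
	private final int capacity;

	public LinkedHashMapLRU(int capacity) {
		// arguments: initialCapacity, loadFactor, accessOrder
		super(capacity, 0.75f, true);
		this.capacity = capacity;
	}

	@Override
	protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
		return size() > capacity;
	}

	public static void main(String[] args) {
		LinkedHashMapLRU<Integer, String> cache = new LinkedHashMapLRU<>(2);
		cache.put(1, "a");
		cache.put(2, "b");
		cache.get(1);      // key 1 becomes the most recently used
		cache.put(3, "c"); // evicts key 2, the least recently used
		System.out.println(cache.keySet()); // prints [1, 3]
	}
}
```

Because the library maintains the recency list for us, no explicit remove-and-reinsert step is needed on access.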

Below is an implementation based on a doubly linked list (Java's Deque):

/* We can use Java's built-in Deque as a double-ended queue to store the
   cache keys, in descending order of reference time from front to back,
   and a set container to check the presence of a key. But removing a key
   from the Deque with remove() takes O(N) time. This can be optimized by
   storing a reference to each key's node in a hash map. */
import java.util.Deque;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.Iterator;

public class LRUCache {

	// store keys of cache
	private Deque<Integer> doublyQueue;

	// store references of key in cache
	private HashSet<Integer> hashSet;

	// maximum capacity of cache
	private final int CACHE_SIZE;

	LRUCache(int capacity) {
		doublyQueue = new LinkedList<>();
		hashSet = new HashSet<>();
		CACHE_SIZE = capacity;
	}

	/* Refer to the page within the LRU cache */
	public void refer(int page) {
		if (!hashSet.contains(page)) {
			if (doublyQueue.size() == CACHE_SIZE) {
				int last = doublyQueue.removeLast();
				hashSet.remove(last);
			}
		}
		else {
			/* The found page may not always be the last element; even if it
			   is an intermediate element, it needs to be removed and re-added
			   at the front of the queue */
			doublyQueue.remove(page);
		}
		doublyQueue.push(page);
		hashSet.add(page);
	}

	// display contents of cache
	public void display() {
		Iterator<Integer> iterator = doublyQueue.iterator();
		while (iterator.hasNext()) {
			System.out.print(iterator.next() + " ");
		}
	}

	public static void main(String[] args) {
		LRUCache cache = new LRUCache(4);
		cache.refer(1);
		cache.refer(2);
		cache.refer(3);
		cache.refer(1);
		cache.refer(4);
		cache.refer(5);
		cache.refer(2);
		cache.refer(2);
		cache.refer(1);
		cache.display();
	}
}
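The comment at the top of the listing above suggests the real optimization: store a reference to each key's list node in a hash map, so that removal no longer requires scanning the deque. Java's `Deque` does not expose its internal nodes, so the sketch below pairs a `HashMap` with a hand-rolled doubly linked list. The names (`Node`, `O1LRUCache`) are illustrative, not from the original article.

```java
import java.util.HashMap;

// O(1) LRU cache: a HashMap maps each key to its node in a doubly
// linked list, so lookup, removal, and insertion are all constant time.
public class O1LRUCache {
	private static class Node {
		int key;
		Node prev, next;
		Node(int key) { this.key = key; }
	}

	private final int capacity;
	private final HashMap<Integer, Node> map = new HashMap<>();
	// Sentinel head/tail nodes avoid null checks; head.next is the most
	// recently used node, tail.prev the least recently used.
	private final Node head = new Node(-1);
	private final Node tail = new Node(-1);

	public O1LRUCache(int capacity) {
		this.capacity = capacity;
		head.next = tail;
		tail.prev = head;
	}

	// Detach a node from the list in O(1).
	private void unlink(Node n) {
		n.prev.next = n.next;
		n.next.prev = n.prev;
	}

	// Insert a node right after the head sentinel in O(1).
	private void addFirst(Node n) {
		n.next = head.next;
		n.prev = head;
		head.next.prev = n;
		head.next = n;
	}

	public void refer(int page) {
		Node n = map.get(page);
		if (n != null) {
			unlink(n); // already cached: move it to the front
		} else {
			if (map.size() == capacity) {
				Node lru = tail.prev; // cache full: evict the LRU node
				unlink(lru);
				map.remove(lru.key);
			}
			n = new Node(page);
			map.put(page, n);
		}
		addFirst(n);
	}

	public boolean contains(int page) {
		return map.containsKey(page);
	}
}
```

The `HashMap` replaces the linear `doublyQueue.remove(page)` scan with a direct jump to the node, which is exactly the O(n) → O(1) improvement discussed at the start of the article.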

An LRU cache implementation using LinkedHashSet:

// Java program to implement LRU cache
// using LinkedHashSet
import java.util.*;

class LRUCache {

	Set<Integer> cache;
	int capacity;

	public LRUCache(int capacity)
	{
		this.cache = new LinkedHashSet<Integer>(capacity);
		this.capacity = capacity;
	}

	// This function returns false if key is not
	// present in cache. Else it moves the key to
	// front by first removing it and then adding
	// it, and returns true.
	public boolean get(int key)
	{
		if (!cache.contains(key))
			return false;
		cache.remove(key);
		cache.add(key);
		return true;
	}

	/* Refers to key x within the LRU cache */
	public void refer(int key)
	{
		if (!get(key))
			put(key);
	}

	// displays contents of cache in reverse order
	public void display()
	{
		LinkedList<Integer> list = new LinkedList<>(cache);

		// The descendingIterator() method of java.util.LinkedList
		// returns an iterator over the elements of this LinkedList
		// in reverse sequential order
		Iterator<Integer> iterator = list.descendingIterator();

		while (iterator.hasNext())
			System.out.print(iterator.next() + " ");
	}
	
	public void put(int key)
	{
		if (cache.size() == capacity) {
			int firstKey = cache.iterator().next();
			cache.remove(firstKey);
		}

		cache.add(key);
	}
	
	public static void main(String[] args)
	{
		LRUCache cache = new LRUCache(4);
		cache.refer(1);
		cache.refer(2);
		cache.refer(3);
		cache.refer(1);
		cache.refer(4);
		cache.refer(5);
		cache.display();
	}
}

Reference: https://www.geeksforgeeks.org/lru-cache-implementation/
