I. Background: In this small project of ours we wanted to make better use of Redis and pick up some performance along the way, so we set out to add a Redis cache. Spring Cache itself is awkward in a few places, though, and the following are the points this extension addresses:
- Freely configurable expiration times
- Delayed double-delete
- Better performance for batch deletion
- @Cacheable(sync = true) locks too broadly; refine the lock granularity (minor)
II. Overall approach: built as an extension on top of Spring Cache.
- A custom annotation, with values passed via ThreadLocal inside an AOP aspect
- A time wheel (HashedWheelTimer) for delayed tasks
- ReentrantLocks grouped by cache key, which could be swapped for a distributed lock later
- No impact on existing functionality
III. Key points
1. Custom annotation
@CacheableExtension(
expireTime = 30000, // 30-second dynamic expiration; only effective together with @Cacheable
delayDeleteTime = 5000, // 5-second delayed double-delete; only effective together with @CacheEvict
lockWhenQuery = true // change the default to false to fall back to Spring's own locking
)
public Order getOrder(String orderId) { ... }
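To make the intended pairing concrete, here is a rough usage sketch (the OrderService below, its DB helpers, and the cache name are hypothetical): expireTime takes effect alongside @Cacheable, while delayDeleteTime takes effect alongside @CacheEvict.

@Service
public class OrderService {

    // Read path: 30-second dynamic TTL, with the DB query guarded by the lock pool
    @Cacheable(value = "order", key = "#orderId")
    @CacheableExtension(expireTime = 30000, lockWhenQuery = true)
    public Order getOrder(String orderId) {
        return loadOrderFromDb(orderId); // hypothetical database lookup
    }

    // Write path: evict now, then delete the key again 5 seconds later (delayed double-delete)
    @CacheEvict(value = "order", key = "#order.id")
    @CacheableExtension(delayDeleteTime = 5000)
    public void updateOrder(Order order) {
        saveOrderToDb(order); // hypothetical database update
    }
}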
The AOP aspect intercepts this annotation,
keeps a CacheExtensionContext in a ThreadLocal,
and CustomRedisCache overrides put so it reads the expiration time dynamically and applies it in the core write:
this.cacheWriter.put(this.name, keyBytes, this.serializeCacheValue(cacheValue), Duration.ofMillis(expire));
2. Cache-breakdown protection; lock pool snippet:
Lock lock = CustomCacheLockPool.getLock(cacheName + cacheKey);
// Wait at most 20 seconds for the lock, so threads cannot block forever
if (lock.tryLock(20, TimeUnit.SECONDS)) {
    // Second cache check after acquiring the lock
    if (cacheHit) {
        return cachedValue;
    }
    // Query the database
    queryDataFromDB();
}
Spring's own RedisCache locks like this:
private synchronized <T> T getSynchronized(Object key, Callable<T> valueLoader)
For performance I narrowed the lock granularity; the full code is below.
The lock is now scoped to the "cache name + cache key" level (or, alternatively, to the method name).
The second cache check after acquiring the lock exists because multiple threads arrive at the same time. Thanks to the lock,
the first thread to acquire it queries the database and releases the lock only after the value has been put into the cache (which is why the unlock is not in a finally block).
By the time the waiting threads acquire the lock, the cache is already populated, so there is no need to hit the database again: the second check returns the value straight from the cache.
3. Delayed batch-delete component
// Time-bucketing (500 ms granularity)
long slot = (System.currentTimeMillis() + delayMs) / 500;
// Group pending keys by bucket
PENDING_DELETES.compute(slot, (k, keys) -> {
    if (keys == null) {
        keys = ConcurrentHashMap.newKeySet();
        // First key in this bucket: submit the wheel-timer task
        TIMER.newTimeout(task, delayMs, TimeUnit.MILLISECONDS);
    }
    keys.add(key);
    return keys;
});
- keys is a Set, so duplicates are dropped and the number of keys stays manageable.
- Keys are bucketed by time: if the current time is 0, tasks due 200, 400, and 800 ms later are split into two batches, 200 and 400 in one group (slot = 0) and 800 in another (slot = 1). Again, this is to keep the volume down.
Even with delayed double-delete, consistency is not guaranteed 100%, and the keys pending delayed deletion live only in memory. The delayed delete is just an extra layer of insurance; ultimately the business code has to protect itself, for example with an optimistic-lock check before saving.
If some data ends up inconsistent, so be it; verify against the database at commit time.
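As a rough sketch of that optimistic-lock check (the Order entity and orderRepository are hypothetical; JPA's @Version mechanism does the version comparison on update):

@Entity
public class Order {
    @Id
    private String id;

    @Version          // bumped on every update; the UPDATE's WHERE clause checks it
    private long version;
    // other fields, getters and setters omitted
}

// On save, a concurrent change surfaces as an exception instead of silently
// overwriting newer data hidden behind a stale cache entry.
try {
    orderRepository.save(order);
} catch (ObjectOptimisticLockingFailureException e) {
    // re-read from the database, then retry or report the conflict
}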
4. Batch deletion
@CacheEvict(allEntries = true)
uses the KEYS command by default.
The configuration line below switches it to SCAN, which scans and deletes in batches:
RedisCacheWriter cacheWriter = RedisCacheWriter.nonLockingRedisCacheWriter(redisConnectionFactory, BatchStrategies.scan(200));
I'm still worried about the case where there is a lot of data, so I plan to push performance further with a Lua script plus UNLINK. The custom annotation therefore gained one more parameter, clearByLua; when clearByLua = true, deletion goes through the Lua script.
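As a usage sketch (the method and cache name are illustrative), pairing @CacheEvict(allEntries = true) with clearByLua = true routes CustomRedisCache.clear() through the SCAN + UNLINK script:

@CacheEvict(value = "order", allEntries = true)
@CacheableExtension(clearByLua = true)
public void refreshAllOrders() {
    // rebuild or reload the underlying data here; the whole "order" cache is cleared via Lua
}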
IV. Full code. If you spot a bug, please leave a comment.
CacheableExtension: the custom annotation
@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface CacheableExtension {
/**
* Uses the global configuration by default; a value greater than 0 overrides it (milliseconds)
* @return
*/
long expireTime() default -1;
/**
* Enables delayed double-delete when greater than 0, in milliseconds
* @return
*/
long delayDeleteTime() default -1;
/**
* Lock before querying the database; enabled by default so a flood of requests cannot hit the database directly
* @return
*/
boolean lockWhenQuery() default true;
/**
* Whether to use a Lua script to clear @CacheEvict(allEntries = true)
* @return
*/
boolean clearByLua() default false;
}
CacheConfig: the configuration class
@Configuration
@EnableCaching
public class CacheConfig {
@Autowired
private RedisCacheProperties cacheProperties;
@Bean
@Primary
public RedisCacheManager cacheManager(RedisConnectionFactory redisConnectionFactory) {
//use SCAN for clears instead of KEYS, which performs poorly on large keyspaces
RedisCacheWriter cacheWriter = RedisCacheWriter.nonLockingRedisCacheWriter(redisConnectionFactory, BatchStrategies.scan(200));
RedisCacheConfiguration cacheConfiguration = RedisCacheConfiguration.defaultCacheConfig()
.serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(new GenericFastJsonRedisSerializer()));
if (cacheProperties.timeToLive != null) {
//apply the global default TTL from spring.cache.redis.time-to-live (a per-method expireTime overrides it)
cacheConfiguration = cacheConfiguration.entryTtl(cacheProperties.timeToLive);
}
if (!cacheProperties.cacheNullValues) {
cacheConfiguration = cacheConfiguration.disableCachingNullValues();
}
if (cacheProperties.useKeyPrefix && StringUtils.isNotEmpty(cacheProperties.keyPrefix)) {
cacheConfiguration = cacheConfiguration.computePrefixWith(cacheName -> cacheProperties.keyPrefix + ":" + cacheName + ":");
} else {
cacheConfiguration = cacheConfiguration.computePrefixWith(cacheName -> cacheName + ":");
}
return new CustomRedisCacheManager(cacheWriter, cacheConfiguration, redisConnectionFactory);
}
@ConfigurationProperties(
prefix = "spring.cache.redis"
)
@Data
@Component
public static class RedisCacheProperties {
private Duration timeToLive;
private boolean cacheNullValues = true;
private String keyPrefix;
private boolean useKeyPrefix = true;
private boolean enableStatistics;
}
}
CacheExtensionAspect: the AOP aspect
@Aspect
@Component
public class CacheExtensionAspect {
private static final int STRIPES = 15;
@Autowired
private CacheManager cacheManager;
private ParameterNameDiscoverer nameDiscoverer = new DefaultParameterNameDiscoverer();
private ExpressionParser parser = new SpelExpressionParser();
public static String createLockKey() {
//build the lock key from cache name + cache key; the granularity can be changed, e.g. use the fully qualified method name instead
return CacheExtensionContext.getCacheName() + CacheExtensionContext.getCacheKey();
}
@Order(Integer.MAX_VALUE)
@Around("@annotation(cacheableExtension)")
public Object aroundInner(ProceedingJoinPoint joinPoint, CacheableExtension cacheableExtension) throws Throwable {
long expire = cacheableExtension.expireTime();
if (expire > 0) {
CacheExtensionContext.setExpire(expire);
}
if (cacheableExtension.delayDeleteTime() > 0) {
CacheExtensionContext.setDelayDelete(cacheableExtension.delayDeleteTime());
}
CacheExtensionContext.setClearByLua(cacheableExtension.clearByLua());
Lock lock = null;
String cacheName = CacheExtensionContext.getCacheName();
if (null != cacheName && cacheableExtension.lockWhenQuery()) {
lock = CustomCacheLockPool.getLock(createLockKey());
if (lock.tryLock(20, TimeUnit.SECONDS)) {
CacheExtensionContext.setLock(lock);
Cache cache = cacheManager.getCache(CacheExtensionContext.getCacheName());
if (null != cache) {
// a thread that was blocked can use the now-populated cache directly instead of querying the database again
Cache.ValueWrapper valueWrapper = cache.get(CacheExtensionContext.getCacheKey());
if (null != valueWrapper) {
CacheExtensionContext.setLoadFromCache(true);
return valueWrapper.get();
}
}
}
}
return joinPoint.proceed();
}
}
CustomCacheLockPool: the lock pool
@Component
public class CustomCacheLockPool {
/**
* Lock objects; could be swapped for a distributed lock, but this small project doesn't need it
*/
private static final ConcurrentHashMap<String, ReentrantLock> lockMap = new ConcurrentHashMap<>();
public static Lock getLock(String lockKey) {
return lockMap.computeIfAbsent(lockKey, k -> new ReentrantLock());
}
public static void removeLock(String key) {
lockMap.remove(key);
}
}
CustomRedisCache: overriding the key methods
public class CustomRedisCache extends RedisCache {
private RedisCacheWriter cacheWriter;
private String name;
private RedisConnectionFactory factory;
private RedisCacheConfiguration config;
private static final int DEFAULT_SCAN_COUNT = 300;
private static final Logger logger = LoggerFactory.getLogger(CustomRedisCache.class);
private static final String LUA_SCRIPT =
"local cursor = '0'\n" +
"local totalDeleted = 0\n" +
"repeat\n" +
" local result = redis.call('SCAN', cursor, 'MATCH', KEYS[1], 'COUNT', tonumber(ARGV[1]))\n" +
" cursor = result[1]\n" +
" if #result[2] > 0 then\n" +
" totalDeleted = totalDeleted + redis.call('UNLINK', unpack(result[2]))\n" +
" end\n" +
"until cursor == '0'\n" +
"return totalDeleted";
protected CustomRedisCache(String name, RedisCacheWriter cacheWriter, RedisCacheConfiguration config, RedisConnectionFactory factory) {
super(name, cacheWriter, config);
this.cacheWriter = cacheWriter;
this.name = name;
this.factory = factory;
this.config = config;
}
@Override
public void put(Object key, Object value) {
//under concurrency, a thread that got its value from the cache has no need to write it back
try {
if (!Boolean.TRUE.equals(CacheExtensionContext.getLoadFromCache())) {
Long expire = CacheExtensionContext.getExpire();
if (expire != null && expire > 0) {
Object cacheValue = this.preProcessCacheValue(value);
if (!this.isAllowNullValues() && cacheValue == null) {
throw new IllegalArgumentException(String.format("Cache '%s' does not allow 'null' values. Avoid storing null via '@Cacheable(unless=\"#result == null\")' or configure RedisCache to allow 'null' via RedisCacheConfiguration.", this.name));
} else {
byte[] keyBytes = this.serializeCacheKey(this.createCacheKey(key));
this.cacheWriter.put(this.name, keyBytes, this.serializeCacheValue(cacheValue), Duration.ofMillis(expire));
}
} else {
super.put(key, value);
}
}
} finally {
Lock lock = CacheExtensionContext.getLock();
//clean up the ThreadLocal context
CacheExtensionContext.removeAll();
if (null != lock) {
lock.unlock();
String lockKey = CacheExtensionAspect.createLockKey();
CustomCacheLockPool.removeLock(lockKey);
}
}
}
@Override
public void clear() {
//fall back to the default clear when Lua is not enabled
if (!Boolean.TRUE.equals(CacheExtensionContext.getClearByLua())) {
super.clear();
return;
}
String pattern = createCacheKey("*");
String lockKey = config.getKeyPrefixFor(name) + "lock";
boolean locked = false;
RedisConnection connection = factory.getConnection();
try {
//locking via connection.setNX() (simplified; return-value check omitted)
connection.setNX(lockKey.getBytes(StandardCharsets.UTF_8), new byte[0]);
locked = true;
Long clearedCount = connection.eval(LUA_SCRIPT.getBytes(), ReturnType.INTEGER, 1, pattern.getBytes(), String.valueOf(DEFAULT_SCAN_COUNT).getBytes());
logger.info("Cleared {} keys matching pattern {}", clearedCount, pattern);
} catch (Exception e) {
logger.error("Error clearing keys with pattern {}: {}", pattern, e.getMessage(), e);
throw e;
} finally {
try {
if (locked) {
connection.del(lockKey.getBytes(StandardCharsets.UTF_8));
}
} finally {
connection.close();
}
}
}
@Override
public void evict(Object key) {
super.evict(key);
if (CacheExtensionContext.getDelayDelete() != null && CacheExtensionContext.getDelayDelete() > 0) {
try {
byte[] keyBytes = this.serializeCacheKey(this.createCacheKey(key));
String keyStr = new String(keyBytes, StandardCharsets.UTF_8);
//hand over the factory rather than an open connection: the delayed task fires seconds later, when a connection obtained here would already be closed
RedisDelayDeleter.scheduleDelete(keyStr, CacheExtensionContext.getDelayDelete(), factory);
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
}
}
@Override
public ValueWrapper get(Object key) {
CacheExtensionContext.setCacheKey(String.valueOf(key));
CacheExtensionContext.setCacheName(this.name);
return super.get(key);
}
}
CustomRedisCacheManager:
public class CustomRedisCacheManager extends RedisCacheManager {
private final RedisCacheWriter cacheWriter;
private RedisConnectionFactory factory;
public CustomRedisCacheManager(RedisCacheWriter cacheWriter, RedisCacheConfiguration defaultCacheConfig, RedisConnectionFactory factory) {
super(cacheWriter, defaultCacheConfig);
this.cacheWriter = cacheWriter;
this.factory = factory;
}
@Override
protected RedisCache createRedisCache(String name, RedisCacheConfiguration cacheConfig) {
return new CustomRedisCache(name, cacheWriter, cacheConfig, factory);
}
}
RedisDelayDeleter: delayed deletion
@Component
public class RedisDelayDeleter {
private static final int BATCH_SIZE = 200;
private static final org.slf4j.Logger log = org.slf4j.LoggerFactory.getLogger(RedisDelayDeleter.class);
private static final long TICK_DURATION = 500;
private static final HashedWheelTimer TIMER = new HashedWheelTimer(
new CustomizableThreadFactory("RedisDelayDeleter"), // custom thread factory
TICK_DURATION, // wheel tick interval, in milliseconds
TimeUnit.MILLISECONDS,
128 // number of wheel slots
);
// Pending keys per time bucket, awaiting batch deletion (thread-safe)
private static final ConcurrentHashMap<Long, Set<String>> PENDING_DELETES = new ConcurrentHashMap<>();
/**
* Delayed deletion; repeated submissions are grouped and de-duplicated, so as long as the delay doesn't span a very large time window, memory usage stays low
* @param key
* @param delayMs
* @param factory used to open a fresh connection when the delayed task actually fires
*/
public static void scheduleDelete(String key, long delayMs, RedisConnectionFactory factory) {
if (delayMs <= 0) {
return;
}
if (delayMs > 24 * 3600 * 1000) {
throw new IllegalArgumentException("Delay must be less than 24 hours");
}
long triggerTime = System.currentTimeMillis() + delayMs;
// bucketing (the window size is adjustable)
long slot = triggerTime / TICK_DURATION;
// group keys into buckets by trigger time
PENDING_DELETES.compute(slot, (k, keys) -> {
//de-duplicate: an existing key is not added twice
if (keys == null) {
keys = ConcurrentHashMap.newKeySet();
// first key in this bucket: submit the wheel-timer task
TIMER.newTimeout(new TimerTask() {
@Override
public void run(Timeout timeout) {
try {
Set<String> bucket = PENDING_DELETES.remove(slot);
if (bucket != null && !bucket.isEmpty()) {
// open a fresh connection here; one opened at scheduling time would be closed by now
try (RedisConnection connection = factory.getConnection()) {
deleteKeysInBatches(bucket, connection);
}
log.info("Delayed delete fired: slot={}, keys={}", slot, bucket.size());
}
} catch (Exception e) {
log.error("Delayed delete failed: " + e.getMessage(), e);
}
}
}, delayMs, TimeUnit.MILLISECONDS);
}
keys.add(key);
return keys;
});
}
/**
* Delete keys in batches
* @param keys
*/
public static void deleteKeysInBatches(Set<String> keys, RedisConnection connection) {
List<String> keysList = new ArrayList<>(keys);
for (int i = 0; i < keysList.size(); i += BATCH_SIZE) {
List<String> batch = keysList.subList(i, Math.min(i + BATCH_SIZE, keysList.size()));
// convert the String keys to byte[] arrays
byte[][] keyBytes = batch.stream()
.map(key -> key.getBytes(StandardCharsets.UTF_8))
.toArray(byte[][]::new);
connection.del(keyBytes);
}
}
}
CacheExtensionContext: the ThreadLocal parameter-passing helper. It could be tidied up; it just grew like this as I kept adding parameters along the way.
public class CacheExtensionContext {
private static final ThreadLocal<Long> expireHolder = new ThreadLocal<>();
private static final ThreadLocal<Long> delayDeleteHolder = new ThreadLocal<>();
private static final ThreadLocal<String> cacheKeyHolder = new ThreadLocal<>();
private static final ThreadLocal<String> cacheNameHolder = new ThreadLocal<>();
private static final ThreadLocal<Lock> lockHolder = new ThreadLocal<>();
private static final ThreadLocal<Boolean> loadFromCache = new ThreadLocal<>();
private static final ThreadLocal<Boolean> clearByLua = new ThreadLocal<>();
public static void setExpire(Long expire) {
expireHolder.set(expire);
}
public static Long getExpire() {
return expireHolder.get();
}
public static void removeExpire() {
expireHolder.remove();
}
public static void setDelayDelete(Long delayDelete) {
delayDeleteHolder.set(delayDelete);
}
public static Long getDelayDelete() {
return delayDeleteHolder.get();
}
public static void removeDelayDelete() {
delayDeleteHolder.remove();
}
public static void setCacheKey(String cacheKey) {
cacheKeyHolder.set(cacheKey);
}
public static String getCacheKey() {
return cacheKeyHolder.get();
}
public static void removeCacheKey() {
cacheKeyHolder.remove();
}
public static void setCacheName(String cacheName) {
cacheNameHolder.set(cacheName);
}
public static String getCacheName() {
return cacheNameHolder.get();
}
public static void removeCacheName() {
cacheNameHolder.remove();
}
public static void setLock(Lock lock) {
lockHolder.set(lock);
}
public static Lock getLock() {
return lockHolder.get();
}
public static void removeLock() {
lockHolder.remove();
}
public static void setLoadFromCache(Boolean loadFromCache) {
CacheExtensionContext.loadFromCache.set(loadFromCache);
}
public static Boolean getLoadFromCache() {
return loadFromCache.get();
}
public static void removeLoadFromCache() {
loadFromCache.remove();
}
public static void setClearByLua(Boolean clearByLua) {
CacheExtensionContext.clearByLua.set(clearByLua);
}
public static Boolean getClearByLua() {
return clearByLua.get();
}
public static void removeClearByLua() {
clearByLua.remove();
}
public static void removeAll() {
removeExpire();
removeDelayDelete();
removeCacheKey();
removeCacheName();
removeLock();
removeLoadFromCache();
removeClearByLua();
}
}
Closing words
Finally, I wish everyone never loses their job. Twice in the past year I've drifted to the edge of unemployment, and our department has lost more than 70% of its programmers. Here's hoping all of us can hang on to the end, and hang on until things turn around.