Redis: finding keys that never expire

To find keys with no expiry (TTL returns -1), scan the keyspace with a shell script like the following:

cursor=0
while true; do
  # Fetch one batch of keys (SCAN is incremental, unlike the blocking KEYS)
  result=$(redis-cli SCAN $cursor)
  cursor=$(echo $result | awk '{print $1}')
  keys=$(echo $result | awk '{$1=""; print $0}' | xargs)

  # Check the TTL of each key in the batch
  # (note: key names containing whitespace will be mis-split here)
  for key in $keys; do
    ttl=$(redis-cli TTL "$key")
    if [ "$ttl" -eq -1 ]; then
      echo "Key '$key' never expires"
    fi
  done

  # A cursor of 0 means the scan is complete
  if [ "$cursor" -eq 0 ]; then
    break
  fi
done
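The cursor/keys parsing above relies on unquoted expansion collapsing redis-cli's multi-line reply into a single line. This can be checked offline against a canned reply (the key names here are made up):

```shell
# Simulated redis-cli SCAN reply: new cursor on the first line, then one key per line
result="17
user:1
user:2"

# Unquoted $result collapses the newlines, so awk sees a single line
cursor=$(echo $result | awk '{print $1}')
keys=$(echo $result | awk '{$1=""; print $0}' | xargs)

echo "cursor=$cursor keys=$keys"   # → cursor=17 keys=user:1 user:2
```

Because this approach word-splits on whitespace, it mis-handles key names containing spaces; for such keyspaces, `redis-cli --scan`, which prints one key per line, is a safer starting point.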

The same check can also run entirely server-side as a one-shot Lua command:

redis-cli EVAL "local cursor = '0' local keys = {} repeat local result = redis.call('SCAN', cursor) cursor = result[1] local batch_keys = result[2] for i = 1, #batch_keys do local ttl = redis.call('TTL', batch_keys[i]) if ttl == -1 then table.insert(keys, batch_keys[i]) end end until cursor == '0' return keys" 0
Redis memory inspection

MEMORY STATS reports a detailed memory breakdown:

Memory Statistics Analysis

Basic Memory Usage

peak.allocated: 1,025,059,944 bytes (~977MB), the peak memory allocated by Redis
total.allocated: 970,568,656 bytes (~925MB), the current total memory allocated by Redis
startup.allocated: 4,430,416 bytes (~4.2MB), memory allocated at startup

Specific Feature Memory Usage

replication.backlog: 33,554,432 bytes (32MB), memory used by the replication backlog
clients.slaves: 20,512 bytes, memory used by replica clients
clients.normal: 1,989,696 bytes (~1.9MB), memory used by normal clients
aof.buffer: 512 bytes, memory used by the AOF buffer
lua.caches: 1,650,680 bytes (~1.6MB), memory used by cached Lua scripts

Database Statistics

keys.count: 209,902 keys in total
keys.bytes-per-key: 4,602 bytes on average per key
dataset.bytes: 911,035,936 bytes (~869MB), total size of the dataset in memory
dataset.percentage: 94.30%, share of net memory used by the dataset

Hash Table Overhead

overhead.hashtable.main: 12,172,448 bytes (~11.6MB), overhead of the main dictionary hash table
overhead.hashtable.expires: 5,714,024 bytes (~5.4MB), overhead of the expires dictionary hash table
overhead.total: 59,532,720 bytes (~56.8MB), total memory overhead

Memory Fragmentation

fragmentation: 1.086, overall memory fragmentation ratio (about 8.6% fragmentation)
fragmentation.bytes: 83,484,200 bytes (~79.6MB), total bytes lost to fragmentation
allocator-fragmentation.ratio: 1.0364, memory allocator fragmentation ratio (about 3.6%)
rss-overhead.ratio: 1.0285, RSS overhead ratio (about 2.9%)

Analysis Conclusion

Good memory utilization: the dataset accounts for 94.30% of total memory, meaning most memory holds actual data.
Normal fragmentation: the overall fragmentation ratio is 1.086, i.e. about 8.6% fragmentation, which is within the normal range; a ratio below 1.5 is generally considered healthy.
High key count: there are 209,902 keys, averaging 4,602 bytes per key.
No rehashing in progress: db.dict.rehashing.count is 0, so no dictionary rehash is currently running.
Memory close to peak: current usage (~925MB) is 94.68% of the peak (~977MB), so the instance has never used significantly more memory than it does now.
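The derived figures in this conclusion can be reproduced from the raw byte counts reported above; a quick awk sanity check:

```shell
# Re-derive the MEMORY STATS summary figures from the raw byte counts
awk 'BEGIN {
  total   = 970568656     # total.allocated
  startup = 4430416       # startup.allocated
  peak    = 1025059944    # peak.allocated
  keys    = 209902        # keys.count

  printf "bytes-per-key: %d\n", int((total - startup) / keys)   # matches keys.bytes-per-key (4602)
  printf "peak-usage:    %.2f%%\n", total / peak * 100          # 94.68%
  printf "fragmentation: %.1f%%\n", (1.086 - 1) * 100           # ratio 1.086 -> 8.6%
}'
```

Note that bytes-per-key is computed from net memory (total minus startup allocation), which is why it is slightly less than total.allocated divided by the key count.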

Investigating which key prefixes use the most memory


-- Aggregate memory usage per next-level key prefix, sorted by memory (descending)
-- ARGV[1]: prefix pattern at any depth, e.g. "*", "go:*" or "go:one:*"
-- ARGV[2]: number of prefixes to report, default 20
-- Uses SCAN in batches to avoid stalling a production instance

local prefix_pattern = ARGV[1] or "*"  -- pattern to match
local limit = tonumber(ARGV[2]) or 20  -- number of prefixes to report
-- Note: in a Lua pattern ":*$" means "trailing colons", so this strips a
-- trailing ":" rather than a literal ":*" suffix. The ":" retained in a
-- pattern like "go:*" is then counted below, which is what makes the
-- aggregation land one level below the pattern.
local stripped_prefix = string.gsub(prefix_pattern, ":*$", "")

-- Depth of the current prefix (number of colons)
local current_level = 0
for _ in string.gmatch(stripped_prefix, ":") do
    current_level = current_level + 1
end

local prefix_stats = {}  -- per-prefix statistics
local cursor = "0"
local batch_size = 500  -- keys per SCAN step; keep modest to avoid blocking

-- Walk every matching key with SCAN
repeat
    local scan_result = redis.call("SCAN", cursor, "MATCH", prefix_pattern, "COUNT", batch_size)
    cursor = scan_result[1]
    local keys = scan_result[2]

    -- Process this batch of keys
    for i, key in ipairs(keys) do
        -- Split the key on ":" into segments
        local parts = {}
        local part_count = 0
        for part in string.gmatch(key, "[^:]+") do
            table.insert(parts, part)
            part_count = part_count + 1
        end

        -- Only keys at least one segment deeper than the current level qualify
        if part_count > current_level + 1 then
            -- Build the next-level prefix
            local next_level_prefix = ""
            for j = 1, current_level + 1 do
                next_level_prefix = next_level_prefix .. parts[j]
                if j < current_level + 1 then
                    next_level_prefix = next_level_prefix .. ":"
                end
            end

            -- Append a wildcard when even deeper levels exist
            if part_count > current_level + 2 then
                next_level_prefix = next_level_prefix .. ":*"
            end

            -- MEMORY USAGE returns nil (false in Lua) if the key vanished
            -- between SCAN and this call, so fall back to 0
            local memory = redis.call("MEMORY", "USAGE", key) or 0

            -- Initialize the prefix entry on first sight
            if not prefix_stats[next_level_prefix] then
                prefix_stats[next_level_prefix] = {
                    count = 0,
                    memory = 0
                }
            end

            -- Update the per-prefix counters
            prefix_stats[next_level_prefix].count = prefix_stats[next_level_prefix].count + 1
            prefix_stats[next_level_prefix].memory = prefix_stats[next_level_prefix].memory + memory
        end
    end
until cursor == "0"  -- cursor 0 means the scan is complete

-- Copy the stats into an array so they can be sorted
local prefix_array = {}
for prefix, stats in pairs(prefix_stats) do
    table.insert(prefix_array, {
        prefix = prefix,
        count = stats.count,
        memory = stats.memory,
        memory_human = string.format("%.2f MB", stats.memory / (1024 * 1024))
    })
end

-- Sort prefixes by memory usage, descending
table.sort(prefix_array, function(a, b) return a.memory > b.memory end)

-- Keep only the top N prefixes (N from ARGV[2], default 20)
if #prefix_array > limit then
    local top_prefixes = {}
    for i = 1, limit do
        top_prefixes[i] = prefix_array[i]
    end
    prefix_array = top_prefixes
end

-- Flatten into the reply format: prefix, then {count, bytes, human-readable MB}
local formatted_result = {}
for _, prefix_data in ipairs(prefix_array) do
    table.insert(formatted_result, prefix_data.prefix)
    table.insert(formatted_result, {
        prefix_data.count,
        prefix_data.memory,
        prefix_data.memory_human
    })
end

return formatted_result

Execution, inlined as a one-liner (here scoped to the "go:*" prefix, top 10):

EVAL "local prefix_pattern = ARGV[1] or '*'; local limit = tonumber(ARGV[2]) or 20; local stripped_prefix = string.gsub(prefix_pattern, ':*$', ''); local current_level = 0; for _ in string.gmatch(stripped_prefix, ':') do current_level = current_level + 1; end; local result = {}; local prefix_stats = {}; local cursor = '0'; local batch_size = 500; repeat local scan_result = redis.call('SCAN', cursor, 'MATCH', prefix_pattern, 'COUNT', batch_size); cursor = scan_result[1]; local keys = scan_result[2]; for i, key in ipairs(keys) do local parts = {}; local part_count = 0; for part in string.gmatch(key, '[^:]+') do table.insert(parts, part); part_count = part_count + 1; end; if part_count > current_level + 1 then local next_level_prefix = ''; for j = 1, current_level + 1 do next_level_prefix = next_level_prefix .. parts[j]; if j < current_level + 1 then next_level_prefix = next_level_prefix .. ':'; end; end; local has_deeper_levels = (part_count > current_level + 2); if has_deeper_levels then next_level_prefix = next_level_prefix .. 
':*'; end; local memory = redis.call('MEMORY', 'USAGE', key) or 0; if not prefix_stats[next_level_prefix] then prefix_stats[next_level_prefix] = {count = 0, memory = 0}; end; prefix_stats[next_level_prefix].count = prefix_stats[next_level_prefix].count + 1; prefix_stats[next_level_prefix].memory = prefix_stats[next_level_prefix].memory + memory; end; end; until cursor == '0'; local prefix_array = {}; for prefix, stats in pairs(prefix_stats) do table.insert(prefix_array, {prefix = prefix, count = stats.count, memory = stats.memory, memory_human = string.format('%.2f MB', stats.memory / (1024 * 1024))}); end; table.sort(prefix_array, function(a, b) return a.memory > b.memory end); if #prefix_array > limit then local top_prefixes = {}; for i = 1, limit do top_prefixes[i] = prefix_array[i]; end; prefix_array = top_prefixes; end; local formatted_result = {}; for _, prefix_data in ipairs(prefix_array) do table.insert(formatted_result, prefix_data.prefix); table.insert(formatted_result, {prefix_data.count, prefix_data.memory, prefix_data.memory_human}); end; return formatted_result" 0 "go:*" 10
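The next-level prefix extraction at the heart of the script can be sanity-checked offline. Below is a minimal awk re-implementation of just that logic, assuming the pattern "go:*" (so current_level is 1 and only keys with three or more segments qualify), run over a few hypothetical key names:

```shell
# For pattern "go:*": keep keys with >2 segments, aggregate on the first two,
# and append ":*" when deeper segments exist (mirrors the Lua logic)
printf '%s\n' go:one:a go:one:b go:two:c:d go:solo |
awk -F: '
NF > 2 {
  p = $1 ":" $2
  if (NF > 3) p = p ":*"
  cnt[p]++
}
END { for (p in cnt) print p, cnt[p] }' | sort
# → go:one 2
# → go:two:* 1
```

go:solo has only two segments and is skipped, matching the part_count > current_level + 1 guard in the script.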