Spring AOP + RocketMQ: Asynchronous Collection of Enterprise Operation Logs (Full Hands-On Walkthrough)
📌 Project Background
In an enterprise microservice architecture, recording operation logs is a hard requirement. The traditional approaches either write directly to the database or call a logging microservice via Feign, which leads to tight coupling, blocking of the main flow, and poor extensibility.
To address these issues, we will use:
- Spring AOP for non-invasive log collection
- RocketMQ for asynchronous, decoupled delivery
- Redis for message idempotency control
- A DLQ (dead-letter queue) to guarantee that log messages are eventually delivered
🧱 Technology Stack
| Module | Technology |
|---|---|
| Log collection | Spring AOP + custom annotation |
| Message middleware | RocketMQ + Spring Cloud Stream |
| Idempotency control | Redis |
| Security framework | Sa-Token |
| Monitoring & compensation | RocketMQ DLQ, custom consumer handling |
🚦 Goals
- Intercept business methods through the `@Log` annotation
- Capture the operator, IP, request parameters, response result, execution time, and other log details
- Deliver log messages asynchronously via RocketMQ
- Use Redis for idempotent handling to prevent duplicate consumption
- Retry automatically on consumption failure, with a DLQ consumer as the final safety net
📦 Maven Dependencies
Make sure both the main business system and the log service include the RocketMQ dependency:
```xml
<!-- RocketMQ Stream -->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-rocketmq</artifactId>
</dependency>
```
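The consumer side below also relies on `StringRedisTemplate` for idempotency checks, so the log service additionally needs a Redis client on the classpath. A minimal sketch, assuming the standard Spring Boot starter (adjust to your own Redis setup):

```xml
<!-- Redis, used for idempotent consumption (assumed: standard Spring Boot starter) -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
```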
1️⃣ Log Annotation Definition
```java
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface Log {
    /** Module / operation title */
    String title() default "";
    /** Business type of the operation */
    BusinessType businessType() default BusinessType.OTHER;
    /** Operator type (e.g. management console) */
    OperatorType operatorType() default OperatorType.MANAGE;
    /** Whether to persist the request parameters */
    boolean isSaveRequestData() default true;
    /** Whether to persist the response body */
    boolean isSaveResponseData() default true;
    /** Parameter names to exclude from the persisted request data */
    String[] excludeParamNames() default {};
}
```
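The annotation refers to `BusinessType` and `OperatorType` enums that are not listed in this article. Below is a minimal sketch of one possible shape (only `OTHER` and `MANAGE` are implied by the defaults above; the other constants are assumptions), plus a hypothetical controller (`UserController`, `UserAddRequest` are placeholders) showing how `@Log` is applied:

```java
// Hypothetical enum sketches; only OTHER and MANAGE are required by the annotation defaults above
public enum BusinessType { OTHER, INSERT, UPDATE, DELETE }
public enum OperatorType { MANAGE, MOBILE, OTHER }

// Hypothetical usage: the aspect in the next section intercepts any method carrying @Log
@RestController
@RequestMapping("/user")
public class UserController {

    @Log(title = "User management", businessType = BusinessType.INSERT)
    @PostMapping
    public String addUser(@RequestBody UserAddRequest req) {
        // ... business logic; the aspect records operator, IP, params, result and cost time
        return "ok";
    }
}
```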
2️⃣ AOP Aspect Implementation (LogAspect)
- Use `@Before` / `@AfterReturning` / `@AfterThrowing` to handle logging in one place
- After the log record is assembled, call `logMqService.saveSysLog()` to send it to MQ asynchronously
- Use a `ThreadLocal` to measure the execution time
```java
@Aspect
@Component
public class LogAspect {

    private static final ThreadLocal<Long> TIME_THREADLOCAL = new NamedThreadLocal<>("Cost Time");
    private static final String[] EXCLUDE_PROPERTIES = {"password", "oldPassword", "newPassword", "confirmPassword", "credentials"};

    @Resource
    private HttpServletRequest request;
    @Resource
    private LogMqService logMqService;

    @Before("@annotation(controllerLog)")
    public void doBefore(JoinPoint joinPoint, Log controllerLog) {
        // Record the start time so the cost can be computed after the method returns
        TIME_THREADLOCAL.set(System.currentTimeMillis());
    }

    @AfterReturning(pointcut = "@annotation(controllerLog)", returning = "jsonResult")
    public void doAfterReturning(JoinPoint joinPoint, Log controllerLog, Object jsonResult) {
        handleLog(joinPoint, controllerLog, null, jsonResult);
    }

    @AfterThrowing(value = "@annotation(controllerLog)", throwing = "e")
    public void doAfterThrowing(JoinPoint joinPoint, Log controllerLog, Exception e) {
        handleLog(joinPoint, controllerLog, e, null);
    }

    protected void handleLog(final JoinPoint joinPoint, Log controllerLog, final Exception e, Object jsonResult) {
        try {
            SysOperLogDTO operLog = new SysOperLogDTO();
            operLog.setStatus(BusinessStatus.SUCCESS.ordinal());
            operLog.setOperIp(IPUtil.getIpAddr(request));
            operLog.setOperUrl(SaHolder.getRequest().getUrl());
            operLog.setOperName(StpUtil.getLoginIdAsString());
            if (e != null) {
                operLog.setStatus(BusinessStatus.FAIL.ordinal());
                operLog.setErrorMsg(StringUtils.substring(e.getMessage(), 0, 2000));
            }
            operLog.setMethod(joinPoint.getTarget().getClass().getName() + "." + joinPoint.getSignature().getName() + "()");
            operLog.setRequestMethod(SaHolder.getRequest().getMethod());
            operLog.setOperTime(new DateTime(LocalDateTime.now()));
            operLog.setCostTime(System.currentTimeMillis() - TIME_THREADLOCAL.get());
            getControllerMethodDescription(joinPoint, controllerLog, operLog, jsonResult);
            // Hand the assembled log over to the MQ service; the main flow is never blocked
            logMqService.saveSysLog(operLog);
        } finally {
            TIME_THREADLOCAL.remove();
        }
    }

    private void getControllerMethodDescription(JoinPoint joinPoint, Log log, SysOperLogDTO operLog, Object jsonResult) {
        operLog.setBusinessType(log.businessType().ordinal());
        operLog.setTitle(log.title());
        operLog.setOperatorType(log.operatorType().ordinal());
        if (log.isSaveRequestData()) {
            setRequestValue(joinPoint, operLog, log.excludeParamNames());
        }
        if (log.isSaveResponseData() && jsonResult != null) {
            operLog.setJsonResult(StringUtils.substring(JSON.toJSONString(jsonResult), 0, 2000));
        }
    }

    private void setRequestValue(JoinPoint joinPoint, SysOperLogDTO operLog, String[] excludeParamNames) {
        String method = operLog.getRequestMethod();
        Map<String, String> paramsMap = SaHolder.getRequest().getParamMap();
        if (paramsMap.isEmpty() && (HttpMethod.PUT.name().equals(method) || HttpMethod.POST.name().equals(method))) {
            operLog.setOperParam(StringUtils.substring(argsArrayToString(joinPoint.getArgs(), excludeParamNames), 0, 2000));
        } else {
            operLog.setOperParam(StringUtils.substring(JSON.toJSONString(paramsMap, excludePropertyPreFilter(excludeParamNames)), 0, 2000));
        }
    }

    private String argsArrayToString(Object[] paramsArray, String[] excludeParamNames) {
        StringBuilder params = new StringBuilder();
        for (Object o : paramsArray) {
            if (o != null && !isFilterObject(o)) {
                try {
                    params.append(JSON.toJSONString(o, excludePropertyPreFilter(excludeParamNames))).append(" ");
                } catch (Exception ignored) {
                }
            }
        }
        return params.toString().trim();
    }

    private PropertyPreExcludeFilter excludePropertyPreFilter(String[] excludeParamNames) {
        return new PropertyPreExcludeFilter().addExcludes(ArrayUtils.addAll(EXCLUDE_PROPERTIES, excludeParamNames));
    }

    private boolean isFilterObject(Object o) {
        if (o instanceof MultipartFile || o instanceof HttpServletRequest || o instanceof HttpServletResponse || o instanceof BindingResult) return true;
        if (o.getClass().isArray()) return MultipartFile.class.isAssignableFrom(o.getClass().getComponentType());
        if (o instanceof Collection<?>) return ((Collection<?>) o).stream().anyMatch(item -> item instanceof MultipartFile);
        if (o instanceof Map<?, ?>) return ((Map<?, ?>) o).values().stream().anyMatch(value -> value instanceof MultipartFile);
        return false;
    }
}
```
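`PropertyPreExcludeFilter` is referenced above but not defined in this article. A minimal sketch of what it typically looks like, assuming the fastjson `SimplePropertyPreFilter` implied by the `JSON.toJSONString(...)` calls:

```java
import com.alibaba.fastjson.serializer.SimplePropertyPreFilter; // assumption: fastjson 1.x
import java.util.Arrays;

// Minimal sketch (assumption): drops the given property names when serializing with fastjson
public class PropertyPreExcludeFilter extends SimplePropertyPreFilter {

    public PropertyPreExcludeFilter addExcludes(String... filters) {
        // SimplePropertyPreFilter exposes a mutable exclude set; add all names to it
        this.getExcludes().addAll(Arrays.asList(filters));
        return this;
    }
}
```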
3️⃣ Log DTO
```java
@Data
public class SysOperLogDTO implements Serializable {
    private String operName;      // operator (login id)
    private String operIp;        // client IP
    private String operUrl;       // request URL
    private String method;        // fully qualified controller method
    private String requestMethod; // HTTP method
    private Integer status;       // BusinessStatus ordinal (success / fail)
    private String errorMsg;      // truncated exception message
    private String operParam;     // truncated request parameters
    private String jsonResult;    // truncated response body
    private Date operTime;        // operation time
    private Long costTime;        // execution time in milliseconds
    private String msgId;         // unique message id, used as the idempotency key
}
```
4️⃣ RocketMQ Producer (Message Sending)
Producer channel definition
```java
public interface LogProducerSource {

    @Output("logOutput")
    MessageChannel logOutput();
}
```
Producer implementation
```java
@EnableBinding(LogProducerSource.class)
@Component
public class LogMqProducer {

    @Resource
    private LogProducerSource source;

    public void sendLogMessage(SysOperLogDTO dto) {
        // Generate a unique msgId; the consumer uses it as the idempotency key
        dto.setMsgId(UUID.randomUUID().toString());
        Message<SysOperLogDTO> msg = MessageBuilder.withPayload(dto)
                .setHeader("rocketmq_KEYS", dto.getMsgId())
                .build();
        source.logOutput().send(msg);
    }
}
```
Service wrapper (the send failure is caught here, so an MQ outage never blocks the business flow):
```java
@Slf4j
@Service
public class LogMqService {

    @Resource
    private LogMqProducer producer;

    public void saveSysLog(SysOperLogDTO dto) {
        try {
            producer.sendLogMessage(dto);
        } catch (Exception e) {
            // Swallow the failure: logging must never break the business flow
            log.warn("Failed to deliver operation log to MQ", e);
        }
    }
}
```
5️⃣ RocketMQ Consumer Implementation
Consumer channel definition
```java
public interface LogSink {

    @Input("logInput")
    SubscribableChannel logInput();

    @Input("dlqInput")
    SubscribableChannel dlqInput();
}
```
Consumer (idempotency control + persistence)
```java
@EnableBinding(LogSink.class)
@Component
@Slf4j
public class LogConsumer {

    @Resource
    private OperLogService operLogService;
    @Resource
    private StringRedisTemplate redisTemplate;

    private static final String LOG_IDEMP_KEY_PREFIX = "log:msgid:";

    @StreamListener("logInput")
    public void receive(SysOperLogDTO dto) {
        String redisKey = LOG_IDEMP_KEY_PREFIX + dto.getMsgId();
        // Idempotency check: only the first delivery to set the key gets processed
        Boolean success = redisTemplate.opsForValue().setIfAbsent(redisKey, "1", Duration.ofMinutes(5));
        if (Boolean.FALSE.equals(success)) {
            log.warn("Duplicate log message skipped: {}", dto.getMsgId());
            return;
        }
        try {
            operLogService.saveLog(dto);
        } catch (Exception e) {
            // Release the idempotency key so the retried delivery is not treated as a duplicate
            redisTemplate.delete(redisKey);
            throw e; // rethrow to trigger the RocketMQ retry mechanism
        }
    }

    @StreamListener("dlqInput")
    public void handleDlq(SysOperLogDTO dto) {
        log.error("Dead-letter log received: msgId={}, payload={}; alerting recommended", dto.getMsgId(), dto);
        // TODO: persist, send email, DingTalk alert, etc.
    }
}
```
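The `handleDlq` TODO can be fleshed out in many ways. One minimal sketch, written as a drop-in replacement for the method above, assuming you simply want dead-letter logs persisted with a failure status before alerting (the alerting hook itself is left out):

```java
@StreamListener("dlqInput")
public void handleDlq(SysOperLogDTO dto) {
    log.error("Dead-letter log received: msgId={}, payload={}", dto.getMsgId(), dto);
    try {
        // Persist the record anyway, flagged as failed, so nothing is silently lost
        dto.setStatus(BusinessStatus.FAIL.ordinal());
        operLogService.saveLog(dto);
    } catch (Exception e) {
        // Last line of defence: keep the payload in the application log for manual replay
        log.error("Failed to persist dead-letter log, manual intervention required. payload={}", dto, e);
    }
    // TODO: hook in email / DingTalk / other alerting here
}
```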
6️⃣ Spring Configuration (application.yml)
```yaml
spring:
  cloud:
    stream:
      rocketmq:
        binder:
          name-server: 127.0.0.1:9876
      bindings:
        logOutput:
          destination: sys-log-topic
        logInput:
          destination: sys-log-topic
          group: log-consumer-group
        dlqInput:
          destination: "%DLQ%log-consumer-group"
          group: log-dlq-consumer
```
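If client-side retry behaviour needs tuning, Spring Cloud Stream's generic consumer properties can be set per binding. A sketch with assumed values:

```yaml
spring:
  cloud:
    stream:
      bindings:
        logInput:
          consumer:
            max-attempts: 3   # assumed value: in-process retry attempts per delivery
            concurrency: 2    # assumed value: concurrent consumer threads for this binding
```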
7️⃣ OperLogService Example
```java
public interface OperLogService {
    void saveLog(SysOperLogDTO dto);
}

@Service
public class OperLogServiceImpl implements OperLogService {

    @Resource
    private OperLogMapper mapper;

    @Override
    public void saveLog(SysOperLogDTO dto) {
        SysOperLog entity = new SysOperLog();
        BeanUtils.copyProperties(dto, entity);
        mapper.insert(entity);
    }
}
```
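`SysOperLog` and `OperLogMapper` are not shown in this article. A minimal sketch assuming a MyBatis-Plus setup; the table name and id strategy are assumptions:

```java
// Minimal sketch (assumption: MyBatis-Plus); the entity mirrors SysOperLogDTO
@Data
@TableName("sys_oper_log")
public class SysOperLog {
    @TableId(type = IdType.AUTO)
    private Long id;
    private String operName;
    private String operIp;
    private String operUrl;
    private String method;
    private String requestMethod;
    private Integer status;
    private String errorMsg;
    private String operParam;
    private String jsonResult;
    private Date operTime;
    private Long costTime;
    private String msgId;
}

@Mapper
public interface OperLogMapper extends BaseMapper<SysOperLog> {
    // BaseMapper#insert is all that OperLogServiceImpl needs
}
```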
🔚 Summary and Outlook
With this design we have achieved:
- ✅ A complete, decoupled AOP + MQ logging pipeline
- ✅ Idempotent consumption, automatic retry, and a DLQ safety net
- ✅ Easy extension towards ES, Kafka, or an ELK-based analytics stack
👉 Suggested next steps:
- Propagate a TraceId through the logs for request tracing
- Introduce message compression and rate limiting
- Monitor consumption rates with Prometheus + Grafana