Previously, whenever an interface needed to push data to the client continuously, WebSocket was the only tool I knew. It often felt like overkill, though: in past business cases the backend really only used WebSocket to tell the frontend to refresh a page, so a full duplex channel was never actually needed. While working on a new requirement I came across SseEmitter and gave it a try; this post records a fairly complete example.
Requirements
I needed to build a Flink debug console: the user writes SQL in the console and submits it to Flink for execution, and the execution logs are continuously streamed back to the frontend (modeled on the open-source framework Dinky). The code that implements this follows.
Interface definition
@Operation(summary = "Debug")
@PostMapping(value = "debug")
public SseEmitter debugTask(@Validated @RequestBody DataDevTaskDebugDTO dto) {
    RLock lock = null;
    SseEmitter emitter;
    try {
        lock = redissonClient.getLock(dto.getTaskId());
        if (lock.tryLock()) {
            emitter = dataDevTaskService.debugTask(dto);
        } else {
            // Failed to acquire the lock (another request is already debugging this task)
            emitter = new SseEmitter(0L);
            emitter.completeWithError(new RuntimeException("The task is already being debugged, please try again later"));
        }
    } catch (Exception e) {
        emitter = new SseEmitter(0L);
        emitter.completeWithError(e);
    } finally {
        // Release the lock only if the lock instance exists and the current thread holds it
        if (lock != null && lock.isHeldByCurrentThread()) {
            lock.unlock();
        }
    }
    return emitter;
}
A simple distributed lock guards the endpoint; when acquisition fails, the SseEmitter is immediately terminated with an error. Note that because the actual debug work runs asynchronously, the lock is released in the finally block as soon as the request returns, so it only serializes concurrent submissions rather than the whole debug session.
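One thing worth knowing before wiring up a frontend: the browser's native EventSource API only supports GET, so a POST endpoint like this has to be consumed with a streaming-capable client (a fetch-based SSE reader in the browser, or something like Spring's WebClient for a quick test). Below is a minimal sketch, assuming the service listens on http://localhost:8080 and the mapping resolves to /debug; the base URL, path, and the taskId/taskSql values are all placeholders:

import java.util.Map;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.http.MediaType;
import org.springframework.http.codec.ServerSentEvent;
import org.springframework.web.reactive.function.client.WebClient;

public class DebugSseClient {
    public static void main(String[] args) {
        WebClient client = WebClient.create("http://localhost:8080"); // hypothetical base URL
        client.post()
                .uri("/debug") // adjust to the controller's actual path prefix
                .contentType(MediaType.APPLICATION_JSON)
                .accept(MediaType.TEXT_EVENT_STREAM)
                // field names mirror DataDevTaskDebugDTO; the values are placeholders
                .bodyValue(Map.of("taskId", "demo-task", "taskSql", "SELECT 1"))
                .retrieve()
                .bodyToFlux(new ParameterizedTypeReference<ServerSentEvent<String>>() {})
                // each event's name is the logger name, its data the log line
                .doOnNext(evt -> System.out.println(evt.event() + ": " + evt.data()))
                .blockLast(); // wait until the server completes the emitter
    }
}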
Service implementation
public SseEmitter debugTask(DataDevTaskDebugDTO dto) {
    SseEmitter connection = sseService.createConnection();
    asyncService.debugTask(dto.getTaskId(), dto.getTaskSql(), true);
    return connection;
}
private final Map<String, SseEmitter> connectionMap = new ConcurrentHashMap<>();
private final AtomicInteger connectionIdGenerator = new AtomicInteger(0);

public SseEmitter createConnection() {
    // 1. Set the connection timeout (0 = never time out; adjust to the business)
    SseEmitter sseEmitter = new SseEmitter(3 * 60 * 1000L); // 3-minute timeout
    // 2. Generate a unique connection ID
    String connectionId = "sse_conn_" + connectionIdGenerator.incrementAndGet();
    log.debug("SSE connection created, ID: " + connectionId + ", active connections: " + connectionMap.size());
    // 3. Store the connection so data can be pushed to it later
    connectionMap.put(connectionId, sseEmitter);
    // Put the connection ID into the MDC so the log appender can find the matching SSE connection
    MDC.put(Constants.MDC_CONN_ID_KEY, connectionId);
    // 4. Remove stale connections on completion/timeout/error (avoids memory leaks)
    sseEmitter.onCompletion(() -> {
        connectionMap.remove(connectionId);
        log.debug("SSE connection completed, ID: " + connectionId + ", active connections: " + connectionMap.size());
    });
    sseEmitter.onTimeout(() -> {
        connectionMap.remove(connectionId);
        log.debug("SSE connection timed out, ID: " + connectionId + ", active connections: " + connectionMap.size());
    });
    sseEmitter.onError((e) -> {
        connectionMap.remove(connectionId);
        log.debug("SSE connection error, ID: " + connectionId + ", error: " + e.getMessage());
    });
    return sseEmitter;
}
I will not paste the async method here; it is just an @Async-annotated piece of logic that creates the Flink job, so the SseEmitter is returned to the frontend almost as soon as it is created. As for why MDC.put is used: it is tightly coupled with the logging side. Dinky implements this with an aspect; I simplified it while borrowing the idea, so there is a fair amount of coupling between the pieces, which is acceptable for now. The last step is simply to filter the log events and forward them to the SseEmitter.
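One caveat this coupling glosses over: MDC is thread-local, so the value put in createConnection is only visible on the request thread, while @Async runs the debug logic on a pool thread. Whether the connection ID actually reaches the Flink-side log statements depends on how the executor is set up; a common way to carry the MDC across is a TaskDecorator on the async executor. A sketch under that assumption (the bean wiring here is hypothetical; the MDC and TaskDecorator APIs are standard SLF4J/Spring):

import java.util.Map;
import org.slf4j.MDC;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
public class AsyncMdcConfig {

    // assumes @Async is wired to this executor (e.g. via AsyncConfigurer or a qualifier)
    @Bean
    public ThreadPoolTaskExecutor asyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        // copy the submitting thread's MDC (including the SSE connection ID) to the worker
        executor.setTaskDecorator(task -> {
            Map<String, String> context = MDC.getCopyOfContextMap();
            return () -> {
                try {
                    if (context != null) {
                        MDC.setContextMap(context);
                    }
                    task.run();
                } finally {
                    MDC.clear(); // don't leak the ID into the next pooled task
                }
            };
        });
        executor.initialize();
        return executor;
    }
}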
Log sending
public class LogSseAppender extends AppenderBase<ILoggingEvent> {

    private final List<String> LOGGER_NAMES = Lists.newArrayList(
            "com.vortex.cloud.data.flink.service.impl.FlinkServiceImpl",
            "org.apache.flink.cdc.connectors.mysql.utils.OptionUtils",
            "org.apache.iceberg.flink.sink.FlinkSink",
            "org.apache.flink.client.program.rest.RestClusterClient",
            "org.apache.flink.runtime.taskmanager.Task");

    @Override
    protected void append(ILoggingEvent event) {
        // In Logback, MDC data is read via getMDCPropertyMap() (the equivalent of Log4j's contextData)
        if (event.getMDCPropertyMap().containsKey(Constants.MDC_CONN_ID_KEY)
                && LOGGER_NAMES.contains(event.getLoggerName())) {
            String connectionId = event.getMDCPropertyMap().get(Constants.MDC_CONN_ID_KEY);
            String log = event.getFormattedMessage();
            String loggerName = event.getLoggerName();
            SseService sseService = SpringContextHolder.getBean(SseServiceImpl.class);
            sseService.pushToSingle(connectionId, log, loggerName);
        }
    }
}
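For the appender to receive events at all, it still has to be registered with Logback, e.g. in logback-spring.xml; the package below is an assumption, so adjust it to wherever LogSseAppender actually lives:

<configuration>
    <appender name="SSE" class="com.example.log.LogSseAppender"/>
    <root level="INFO">
        <appender-ref ref="SSE"/>
    </root>
</configuration>

Attaching it to the root logger is the simplest option, since the LOGGER_NAMES check already filters events; scoping the appender to specific loggers in the XML would work just as well. With the appender in place, pushToSingle is what actually writes the event out: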
@Override
public void pushToSingle(String connectionId, String data, String eventName) {
    SseEmitter emitter = connectionMap.get(connectionId);
    if (emitter == null) {
        log.debug("SSE connection not found, ID: " + connectionId);
        return;
    }
    try {
        SseEmitter.SseEventBuilder event = SseEmitter.event()
                .name(eventName)
                .data(data)
                .id(String.valueOf(System.currentTimeMillis()));
        emitter.send(event);
        log.debug("Pushed data to connection " + connectionId + ": " + data);
    } catch (IOException e) {
        connectionMap.remove(connectionId);
        log.error("Failed to push to connection " + connectionId + ", connection removed: " + e.getMessage());
    }
}
The null check in pushToSingle is why createConnection stored each emitter in the map: the appender looks the connection up by ID and writes the log line out through it. Still fairly rough, admittedly.
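One cheap hardening step that is not in the code above: proxies and gateways often drop HTTP connections that stay silent for too long, and a periodic SSE comment keeps the stream alive without emitting a real event. A sketch, assuming the method lives in the same service as connectionMap and that @EnableScheduling is present on a configuration class:

// inside the SSE service, alongside connectionMap; requires @EnableScheduling
@Scheduled(fixedRate = 30_000)
public void heartbeat() {
    connectionMap.forEach((id, emitter) -> {
        try {
            // comment lines (": ...") are ignored by SSE clients
            emitter.send(SseEmitter.event().comment("heartbeat"));
        } catch (IOException e) {
            // dead connection: drop it, same as pushToSingle does
            connectionMap.remove(id);
        }
    });
}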
That is the overall implementation. Quite a few of these pieces were new to me; I borrowed from Dinky, but the actual business does not need anything that complex yet, so a fair amount of code was simplified. Optimizations can wait until the feature sees real use.