FluxSink: Example and Analysis

This article takes a detailed look at how FluxSink works in the Reactor framework and at its implementation details, including the role of inner classes such as BufferAsyncSink and how the different overflow strategies deal with backpressure.

The focus here is on the mechanics of FluxSink.

FluxSink

reactor-core-3.1.3.RELEASE-sources.jar!/reactor/core/publisher/FluxSink.java

/**
 * Wrapper API around a downstream Subscriber for emitting any number of
 * next signals followed by zero or one onError/onComplete.
 * <p>
 * @param <T> the value type
 */
public interface FluxSink<T> {

	/**
     * @see Subscriber#onComplete()
     */
    void complete();

	/**
	 * Return the current subscriber {@link Context}.
	 * <p>
	 *   {@link Context} can be enriched via {@link Flux#subscriberContext(Function)}
	 *   operator or directly by a child subscriber overriding
	 *   {@link CoreSubscriber#currentContext()}
	 *
	 * @return the current subscriber {@link Context}.
	 */
	Context currentContext();

    /**
     * @see Subscriber#onError(Throwable)
     * @param e the exception to signal, not null
     */
    void error(Throwable e);

    /**
     * Try emitting, might throw an unchecked exception.
     * @see Subscriber#onNext(Object)
     * @param t the value to emit, not null
     */
    FluxSink<T> next(T t);

	/**
	 * The current outstanding request amount.
	 * @return the current outstanding request amount
	 */
	long requestedFromDownstream();

	/**
	 * Returns true if the downstream cancelled the sequence.
	 * @return true if the downstream cancelled the sequence
	 */
	boolean isCancelled();

	/**
	 * Attaches a {@link LongConsumer} to this {@link FluxSink} that will be notified of
	 * any request to this sink.
	 * <p>
	 * For push/pull sinks created using {@link Flux#create(java.util.function.Consumer)}
	 * or {@link Flux#create(java.util.function.Consumer, FluxSink.OverflowStrategy)},
	 * the consumer
	 * is invoked for every request to enable a hybrid backpressure-enabled push/pull model.
	 * When bridging with asynchronous listener-based APIs, the {@code onRequest} callback
	 * may be used to request more data from source if required and to manage backpressure
	 * by delivering data to sink only when requests are pending.
	 * <p>
	 * For push-only sinks created using {@link Flux#push(java.util.function.Consumer)}
	 * or {@link Flux#push(java.util.function.Consumer, FluxSink.OverflowStrategy)},
	 * the consumer is invoked with an initial request of {@code Long.MAX_VALUE} when this method
	 * is invoked.
	 *
	 * @param consumer the consumer to invoke on each request
	 * @return {@link FluxSink} with a consumer that is notified of requests
	 */
	FluxSink<T> onRequest(LongConsumer consumer);

	/**
	 * Associates a disposable resource with this FluxSink
	 * that will be disposed in case the downstream cancels the sequence
	 * via {@link org.reactivestreams.Subscription#cancel()}.
	 * @param d the disposable callback to use
	 * @return the {@link FluxSink} with resource to be disposed on cancel signal
	 */
	FluxSink<T> onCancel(Disposable d);

	/**
	 * Associates a disposable resource with this FluxSink
	 * that will be disposed on the first terminate signal which may be
	 * a cancel, complete or error signal.
	 * @param d the disposable callback to use
	 * @return the {@link FluxSink} with resource to be disposed on first terminate signal
	 */
	FluxSink<T> onDispose(Disposable d);

	/**
	 * Enumeration for backpressure handling.
	 */
	enum OverflowStrategy {
		/**
		 * Completely ignore downstream backpressure requests.
		 * <p>
		 * This may yield {@link IllegalStateException} when queues get full downstream.
		 */
		IGNORE,
		/**
		 * Signal an {@link IllegalStateException} when the downstream can't keep up
		 */
		ERROR,
		/**
		 * Drop the incoming signal if the downstream is not ready to receive it.
		 */
		DROP,
		/**
		 * Downstream will get only the latest signals from upstream.
		 */
		LATEST,
		/**
		 * Buffer all signals if the downstream can't keep up.
		 * <p>
		 * Warning! This does unbounded buffering and may lead to {@link OutOfMemoryError}.
		 */
		BUFFER
	}
}

Note that OverflowStrategy.BUFFER uses an unbounded queue, so pay extra attention to the risk of OutOfMemoryError.
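
As a side note (a minimal sketch, not from the original text): Flux.create without an explicit strategy defaults to OverflowStrategy.BUFFER, so the following two declarations should behave the same.

    // both use the BUFFER strategy; the second just states it explicitly
    Flux<Integer> implicitBuffer = Flux.create(sink -> sink.next(1));
    Flux<Integer> explicitBuffer = Flux.create(sink -> sink.next(1), FluxSink.OverflowStrategy.BUFFER);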

Example

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import reactor.core.publisher.Flux;
import reactor.core.publisher.FluxSink;

// wrapper class, imports and LOGGER added here to make the snippet self-contained
public class FluxSinkDemo {

    private static final Logger LOGGER = LoggerFactory.getLogger(FluxSinkDemo.class);

    public static void main(String[] args) throws InterruptedException {
        final Flux<Integer> flux = Flux.<Integer>create(fluxSink -> {
            //NOTE sink:class reactor.core.publisher.FluxCreate$SerializedSink
            LOGGER.info("sink:{}", fluxSink.getClass());
            while (true) {
                LOGGER.info("sink next");
                fluxSink.next(ThreadLocalRandom.current().nextInt());
            }
        }, FluxSink.OverflowStrategy.BUFFER);

        //NOTE flux:class reactor.core.publisher.FluxCreate,prefetch:-1
        LOGGER.info("flux:{},prefetch:{}", flux.getClass(), flux.getPrefetch());

        flux.subscribe(e -> {
            LOGGER.info("subscribe:{}", e);
            try {
                TimeUnit.SECONDS.sleep(10);
            } catch (InterruptedException e1) {
                e1.printStackTrace();
            }
        });

        TimeUnit.MINUTES.sleep(20);
    }
}

Here, create produces a reactor.core.publisher.FluxCreate, and the sink passed to the lambda is a reactor.core.publisher.FluxCreate$SerializedSink.

Flux.subscribe

reactor-core-3.1.3.RELEASE-sources.jar!/reactor/core/publisher/Flux.java

	/**
	 * Subscribe {@link Consumer} to this {@link Flux} that will respectively consume all the
	 * elements in the sequence, handle errors, react to completion, and request upon subscription.
	 * It will let the provided {@link Subscription subscriptionConsumer}
	 * request the adequate amount of data, or request unbounded demand
	 * {@code Long.MAX_VALUE} if no such consumer is provided.
	 * <p>
	 * For a passive version that observe and forward incoming data see {@link #doOnNext(java.util.function.Consumer)},
	 * {@link #doOnError(java.util.function.Consumer)}, {@link #doOnComplete(Runnable)}
	 * and {@link #doOnSubscribe(Consumer)}.
	 * <p>For a version that gives you more control over backpressure and the request, see
	 * {@link #subscribe(Subscriber)} with a {@link BaseSubscriber}.
	 * <p>
	 * Keep in mind that since the sequence can be asynchronous, this will immediately
	 * return control to the calling thread. This can give the impression the consumer is
	 * not invoked when executing in a main thread or a unit test for instance.
	 *
	 * <p>
	 * <img class="marble" src="https://raw.githubusercontent.com/reactor/reactor-core/v3.1.3.RELEASE/src/docs/marble/subscribecomplete.png" alt="">
	 *
	 * @param consumer the consumer to invoke on each value
	 * @param errorConsumer the consumer to invoke on error signal
	 * @param completeConsumer the consumer to invoke on complete signal
	 * @param subscriptionConsumer the consumer to invoke on subscribe signal, to be used
	 * for the initial {@link Subscription#request(long) request}, or null for max request
	 *
	 * @return a new {@link Disposable} that can be used to cancel the underlying {@link Subscription}
	 */
	public final Disposable subscribe(
			@Nullable Consumer<? super T> consumer,
			@Nullable Consumer<? super Throwable> errorConsumer,
			@Nullable Runnable completeConsumer,
			@Nullable Consumer<? super Subscription> subscriptionConsumer) {
		return subscribeWith(new LambdaSubscriber<>(consumer, errorConsumer,
				completeConsumer,
				subscriptionConsumer));
	}

	@Override
	public final void subscribe(Subscriber<? super T> actual) {
		onLastAssembly(this).subscribe(Operators.toCoreSubscriber(actual));
	}

The lambdas are wrapped in a LambdaSubscriber, and the call eventually reaches FluxCreate.subscribe.
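
As the javadoc above points out, when no subscriptionConsumer is provided the subscriber requests unbounded demand (Long.MAX_VALUE). A minimal sketch (reusing the flux and LOGGER from the example above) of taking over the initial request instead:

        flux.subscribe(
                e -> LOGGER.info("subscribe:{}", e),   // onNext
                t -> LOGGER.error("error", t),         // onError
                () -> LOGGER.info("complete"),         // onComplete
                s -> s.request(1));                    // request a single element instead of Long.MAX_VALUE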

FluxCreate.subscribe

reactor-core-3.1.3.RELEASE-sources.jar!/reactor/core/publisher/FluxCreate.java

	public void subscribe(CoreSubscriber<? super T> actual) {
		BaseSink<T> sink = createSink(actual, backpressure);

		actual.onSubscribe(sink);
		try {
			source.accept(
					createMode == CreateMode.PUSH_PULL ? new SerializedSink<>(sink) :
							sink);
		}
		catch (Throwable ex) {
			Exceptions.throwIfFatal(ex);
			sink.error(Operators.onOperatorError(ex, actual.currentContext()));
		}
	}
	static <T> BaseSink<T> createSink(CoreSubscriber<? super T> t,
			OverflowStrategy backpressure) {
		switch (backpressure) {
			case IGNORE: {
				return new IgnoreSink<>(t);
			}
			case ERROR: {
				return new ErrorAsyncSink<>(t);
			}
			case DROP: {
				return new DropAsyncSink<>(t);
			}
			case LATEST: {
				return new LatestAsyncSink<>(t);
			}
			default: {
				return new BufferAsyncSink<>(t, Queues.SMALL_BUFFER_SIZE);
			}
		}
	}	

A sink is created first — a BufferAsyncSink here — then LambdaSubscriber.onSubscribe is called, and finally source.accept is invoked, i.e. the lambda passed to Flux.create runs and starts producing data, kicking off the stream.

LambdaSubscriber.onSubscribe

reactor-core-3.1.3.RELEASE-sources.jar!/reactor/core/publisher/LambdaSubscriber.java

	public final void onSubscribe(Subscription s) {
		if (Operators.validate(subscription, s)) {
			this.subscription = s;
			if (subscriptionConsumer != null) {
				try {
					subscriptionConsumer.accept(s);
				}
				catch (Throwable t) {
					Exceptions.throwIfFatal(t);
					s.cancel();
					onError(t);
				}
			}
			else {
				s.request(Long.MAX_VALUE);
			}
		}
	}

This in turn calls request(Long.MAX_VALUE) on the BufferAsyncSink, which is really BaseSink.request:

		public final void request(long n) {
			if (Operators.validate(n)) {
				Operators.addCap(REQUESTED, this, n);

				LongConsumer consumer = requestConsumer;
				if (n > 0 && consumer != null && !isCancelled()) {
					consumer.accept(n);
				}
				onRequestedFromDownstream();
			}
		}

The onRequestedFromDownstream here is BufferAsyncSink.onRequestedFromDownstream:

		@Override
		void onRequestedFromDownstream() {
			drain();
		}

which simply calls BufferAsyncSink.drain.

BufferAsyncSink.drain

		void drain() {
			if (WIP.getAndIncrement(this) != 0) {
				return;
			}

			int missed = 1;
			final Subscriber<? super T> a = actual;
			final Queue<T> q = queue;

			for (; ; ) {
				long r = requested;
				long e = 0L;

				while (e != r) {
					if (isCancelled()) {
						q.clear();
						return;
					}

					boolean d = done;

					T o = q.poll();

					boolean empty = o == null;

					if (d && empty) {
						Throwable ex = error;
						if (ex != null) {
							super.error(ex);
						}
						else {
							super.complete();
						}
						return;
					}

					if (empty) {
						break;
					}

					a.onNext(o);

					e++;
				}

				if (e == r) {
					if (isCancelled()) {
						q.clear();
						return;
					}

					boolean d = done;

					boolean empty = q.isEmpty();

					if (d && empty) {
						Throwable ex = error;
						if (ex != null) {
							super.error(ex);
						}
						else {
							super.complete();
						}
						return;
					}
				}

				if (e != 0) {
					Operators.produced(REQUESTED, this, e);
				}

				missed = WIP.addAndGet(this, -missed);
				if (missed == 0) {
					break;
				}
			}
		}

The queue here is the one set up when the BufferAsyncSink was created; its size hint defaults to Queues.SMALL_BUFFER_SIZE, i.e. Math.max(16, Integer.parseInt(System.getProperty("reactor.bufferSize.small", "256"))) — the queue itself is still unbounded, as noted above. The onNext call here synchronously invokes the LambdaSubscriber's consumer.
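
If the default of 256 matters for your workload, the property named in the snippet above can be changed; a minimal sketch (an assumption on ordering: SMALL_BUFFER_SIZE is read once into a static field, so the property must be set before any Reactor class is loaded, e.g. via -Dreactor.bufferSize.small=512 on the command line):

    public static void main(String[] args) {
        // must run before reactor.util.concurrent.Queues is first touched,
        // otherwise the default of 256 has already been captured
        System.setProperty("reactor.bufferSize.small", "512");
        // ... build and subscribe to the Flux afterwards
    }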

FluxCreate.subscribe#source.accept

		source.accept(
				createMode == CreateMode.PUSH_PULL ? new SerializedSink<>(sink) :
						sink);

With CreateMode.PUSH_PULL the sink is wrapped in a SerializedSink, and the lambda consumer supplied to Flux.create is then invoked:

fluxSink -> {
            //NOTE sink:class reactor.core.publisher.FluxCreate$SerializedSink
            LOGGER.info("sink:{}",fluxSink.getClass());
            while (true) {
                LOGGER.info("sink next");
                fluxSink.next(ThreadLocalRandom.current().nextInt());
            }
        }

After that, the data push begins.
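
For comparison (a minimal sketch, not from the original article): Flux.push builds the FluxCreate in PUSH_ONLY mode, so by the branch shown above the consumer should receive the BaseSink itself rather than a SerializedSink wrapper.

        Flux.<Integer>push(fluxSink -> {
            // expected to print a BaseSink subclass (e.g. FluxCreate$BufferAsyncSink) rather than SerializedSink
            LOGGER.info("push sink:{}", fluxSink.getClass());
            fluxSink.next(1);
            fluxSink.complete();
        }).subscribe(e -> LOGGER.info("subscribe:{}", e));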

SerializedSink.next

reactor-core-3.1.3.RELEASE-sources.jar!/reactor/core/publisher/FluxCreate.java#SerializedSink.next

		public FluxSink<T> next(T t) {
			Objects.requireNonNull(t, "t is null in sink.next(t)");
			if (sink.isCancelled() || done) {
				Operators.onNextDropped(t, sink.currentContext());
				return this;
			}
			if (WIP.get(this) == 0 && WIP.compareAndSet(this, 0, 1)) {
				try {
					sink.next(t);
				}
				catch (Throwable ex) {
					Operators.onOperatorError(sink, ex, t, sink.currentContext());
				}
				if (WIP.decrementAndGet(this) == 0) {
					return this;
				}
			}
			else {
				Queue<T> q = queue;
				synchronized (this) {
					q.offer(t);
				}
				if (WIP.getAndIncrement(this) != 0) {
					return this;
				}
			}
			drainLoop();
			return this;
		}

This calls BufferAsyncSink.next and only returns after the drain loop has run.
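
The WIP counter and queue in SerializedSink are what allow several threads to call next concurrently while the downstream still sees serialized onNext signals. A minimal sketch (hypothetical, not from the original article) of such a multi-threaded producer:

        Flux<Integer> multiProducer = Flux.create(sink -> {
            Runnable producer = () -> {
                for (int i = 0; i < 100; i++) {
                    sink.next(i); // safe to call from several threads thanks to SerializedSink
                }
            };
            new Thread(producer, "producer-1").start();
            new Thread(producer, "producer-2").start();
        });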

BufferAsyncSink.next

		public FluxSink<T> next(T t) {
			queue.offer(t);
			drain();
			return this;
		}

The value is put into the queue, and drain then pulls it out and synchronously invokes LambdaSubscriber.onNext:

reactor-core-3.1.3.RELEASE-sources.jar!/reactor/core/publisher/LambdaSubscriber.java

	@Override
	public final void onNext(T x) {
		try {
			if (consumer != null) {
				consumer.accept(x);
			}
		}
		catch (Throwable t) {
			Exceptions.throwIfFatal(t);
			this.subscription.cancel();
			onError(t);
		}
	}

In other words, the consumer supplied to subscribe is invoked synchronously; in this example it not only logs but also sleeps, so the call blocks. Only once it returns does fluxSink.next return and the loop continue:

fluxSink -> {
            //NOTE sink:class reactor.core.publisher.FluxCreate$SerializedSink
            LOGGER.info("sink:{}",fluxSink.getClass());
            while (true) {
                LOGGER.info("sink next");
                fluxSink.next(ThreadLocalRandom.current().nextInt());
            }
        }

Summary

Although the fluxSink lambda looks like an endless loop of next calls, there is no real cause for concern: as long as the subscriber and the fluxSink run on the same thread (both on the main thread in this example), the calls are synchronous and blocking.

On subscribe, LambdaSubscriber.onSubscribe is invoked and request(N) asks for data; then source.accept runs, i.e. the lambda passed to Flux.create starts producing data and the stream begins.

Inside fluxSink.next the subscriber's consumer is called and blocks; only after it returns does the loop continue.

As for how the BUFFER strategy can run into OOM, it is worth thinking about how that situation could arise.
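
One way to provoke it (a minimal sketch, a hypothetical variation on the example above rather than code from the original article, assuming reactor.core.scheduler.Schedulers is imported): move the consumer onto another thread with publishOn. The producer loop then no longer blocks on the slow consumer, and BufferAsyncSink's unbounded queue keeps growing.

        Flux.<Integer>create(fluxSink -> {
            while (true) {
                fluxSink.next(ThreadLocalRandom.current().nextInt());
            }
        }, FluxSink.OverflowStrategy.BUFFER)
            .publishOn(Schedulers.single())       // consumer now runs on a different thread
            .subscribe(e -> {
                try {
                    TimeUnit.SECONDS.sleep(10);   // slow consumer: the sink's buffer grows without bound
                } catch (InterruptedException ex) {
                    Thread.currentThread().interrupt();
                }
            });
        // note: subscribe never returns here, because the producer loops forever on the calling thread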
