Sequential Task Processing Based on the Spring Event Framework

This article describes how to use the Observer Pattern to improve a task-processing flow: by decoupling task processing from workflow control, it achieves a more flexible task-scheduling mechanism. It covers the design of the task model in detail and shows how to implement asynchronous processing with the Spring event framework.

This is a follow-up to my last post, where I explained the basic concept and usage of the Observer Pattern in enterprise applications. Here, let's dive into a more detailed, practical example.

Suppose you have a series of sequential tasks to process, arranged as a simple directed graph with no cycles, as shown below: one root task has multiple sub tasks, and a sub task starts to be handled only after its pre task has been processed successfully.



First, we need to define a task model inside the sequential-process framework, distinct from the task entity in the application itself. Its basic data structure holds the task object oid, the task relationships (pre tasks and sub tasks), the task's position (root task or not), and the task type (for example, email task, fax task, etc.), which gives us the structure described in the Java code below:

public class LightweightTask implements java.io.Serializable {

	private long taskOid;
	private String taskType;
	private boolean isRootTask;
	private List<LightweightTask> subTasks = new ArrayList<LightweightTask>();
	private List<LightweightTask> preTasks = new ArrayList<LightweightTask>();

	public LightweightTask(long taskOid, String taskType, boolean isRootTask) {
		this.taskOid = taskOid;
		this.isRootTask = isRootTask;
		this.taskType = taskType;
	}
	// cut for brevity
}

A different task type needs a different task processor, so I've defined a common interface that identifies a task processor; all we need to do is implement it once per task type. I've also got a dispatcher class that accepts a String parameter (the task type) and returns the matching task processor; this can be managed in Spring through map injection and won't be covered in detail here.
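To make the rest of the code easier to follow, here is a minimal sketch of what that interface and dispatcher might look like. The method names process(), isProcessorSynchronized() and getTaskProcessor() simply match how they are used in the listener code further below; the registry field, its setter and the singleton handle are my assumptions, not the original code.

import java.util.HashMap;
import java.util.Map;

// Contract every task processor implements; one implementation per task type
public interface LbpfProcessor {

	// Process the task identified by the given oid
	void process(long taskOid);

	// true if the work is finished inside this call, false if it completes later
	boolean isProcessorSynchronized();
}

// Dispatcher: maps a task type to the fully qualified class name of its
// processor; the map itself can be populated through Spring map injection
public class LbpfProcessorDispatcherManager {

	public static final LbpfProcessorDispatcherManager instance =
			new LbpfProcessorDispatcherManager();

	private Map<String, String> processorRegistry = new HashMap<String, String>();

	public void setProcessorRegistry(Map<String, String> processorRegistry) {
		this.processorRegistry = processorRegistry;
	}

	public String getTaskProcessor(String taskType) {
		return processorRegistry.get(taskType);
	}
}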

OK, let's continue the main story. We've got the tasks and the processors, so what should we do? Start from the root task and process it; if it succeeds, do the same for each of its sub tasks, and so on recursively. We also need to handle exceptional cases, for instance a concurrency issue, in which case we may need to reprocess the task.
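Just to make the drawbacks discussed next concrete, here is a rough sketch of that straightforward approach. This is my own reconstruction rather than the original code; it assumes the usual getters on LightweightTask (getTaskType(), getTaskOid(), getSubTasks()) and the processor/dispatcher sketched above.

public class NaiveSequentialProcessor {

	// Workflow control and task processing live in one recursive, synchronous method
	public void process(LightweightTask task) throws Exception {
		String processorClass = LbpfProcessorDispatcherManager.instance
				.getTaskProcessor(task.getTaskType());
		LbpfProcessor processor =
				(LbpfProcessor) Class.forName(processorClass).newInstance();
		processor.process(task.getTaskOid());

		// Only after the current task succeeds do we walk into its sub tasks
		for (LightweightTask sub : task.getSubTasks()) {
			process(sub);
		}
	}
}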

This approach really works, but can we improve it? Reviewing the solution above, we can see some drawbacks: the task processing and the workflow control are tightly coupled, and all tasks are processed synchronously. Maybe we can find another way. Yes, the Observer Pattern!

So, imagine that our task workflow is a kind of pipeline, and every task inside it is a single stage. A stage always starts from an input stream and ends with an output stream, so we can picture three kinds of events in our case: InputStreamEvent, OutputStreamEvent and ExceptionStreamEvent. The whole flow looks like the picture below:



The workflow starts from the root stage and acts in this way:

1. Publish an input event for the current stage (task).

2. The input event listener processes the event.

3. If step 2 raises an exception, publish an exception event.

4. If the root cause of the exception is a concurrency issue, re-publish the input event for the same stage; otherwise exit the process.

5. If step 2 completes successfully with no exception, publish an output event.

6. The output event listener converts the output event into multiple input events, one per sub task, and publishes them.


So, here we have it. For the implementation, I'd like to use the Spring Event Framework. The InputStreamEvent looks like this:

/**
 * @ClassName: InputStreamEvent
 * @Description: Java event used to monitor the process stream within the task
 *               pipeline network. Note that no inheritance should be used among
 *               the event POJOs, as it would cause one event to be processed
 *               multiple times.
 * @author ZHUGA3
 * 
 * @date Oct 23, 2013
 */
public class InputStreamEvent extends ApplicationEvent {

	private LightweightTask currentStage;
	/**
	 * @param currentStage
	 */
	public InputStreamEvent(LightweightTask currentStage) {
		// Attention: the Spring event framework takes the event type and the event
		// source object type as a unique key for matching listeners; here we only
		// care about the event type, hence we simply use Object as the source type
		super(new Object());
		this.currentStage = currentStage;
	}

	public LightweightTask getCurrentStage() {
		return currentStage;
	}
	public void setCurrentStage(LightweightTask currentStage) {
		this.currentStage = currentStage;
	}
	private static final long serialVersionUID = -6855044300379366418L;
}

And the input listener will look like this:

public class InputStreamEventListener extends
		AbstractEventListener<InputStreamEvent> {

	@Override
	public void onApplicationEvent(InputStreamEvent event) {
		final LightweightTask lTask = event.getCurrentStage();

		if (!InterfaceTaskStatus.SUCCESS.toString().equals(
				getITaskStatus(lTask.getTaskOid()))) {
			try {
				// We need the nested exception (if any) to tell whether the
				// process failure was caused by a concurrency issue
				new LBPFWriteTransaction() {
					@Override
					protected void process() {
						String processorClass = LbpfProcessorDispatcherManager.instance
								.getTaskProcessor(lTask.getTaskType());
						try {
							LbpfProcessor processor = ((LbpfProcessor) Class
									.forName(processorClass).newInstance());
							processor.process(lTask.getTaskOid());
							String status = processor.isProcessorSynchronized()
									? InterfaceTaskStatus.SUCCESS.toString()
									: InterfaceTaskStatus.PENDING.toString();
							updateITaskstatus(lTask.getTaskOid(), status);
						} catch (Exception e) {
							throw new RuntimeException(e);
						}
					}
				}.execute();
			} catch (Exception e) {
				EventManager.instance.publishEvent(new ExceptionStreamEvent(
						lTask, e));
				return;
			}
		}

		if (InterfaceTaskStatus.SUCCESS.toString().equals(
				getITaskStatus(lTask.getTaskOid()))) {
			EventManager.instance.publishEvent(new OutputStreamEvent(lTask));
		}
	}
}
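The listener above relies on two small helpers that the post does not show. Here are minimal sketches, assuming the most obvious implementations: AbstractEventListener only pins down the generic ApplicationListener type, and EventManager exposes the Spring ApplicationEventPublisher through a static handle so the listeners can publish follow-up events. The exact shape of these classes in the original project may differ.

import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationListener;

// Base class whose only job is to fix the event type handled by a concrete listener
public abstract class AbstractEventListener<E extends ApplicationEvent>
		implements ApplicationListener<E> {
}

import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.ApplicationEventPublisherAware;

// Thin wrapper around the Spring publisher; it must be registered as a Spring bean
// so that setApplicationEventPublisher() is called and the static handle is set
public class EventManager implements ApplicationEventPublisherAware {

	public static EventManager instance;

	private ApplicationEventPublisher publisher;

	@Override
	public void setApplicationEventPublisher(ApplicationEventPublisher publisher) {
		this.publisher = publisher;
		instance = this;
	}

	public void publishEvent(ApplicationEvent event) {
		publisher.publishEvent(event);
	}
}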

The other event models and listeners follow the same pattern as these two. From the code above you can see that the Spring event listener is designed around a generic type, so do not use inheritance when designing the event models, unless you have the special requirement that one event should be processed by several different listeners.
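For completeness, here are rough sketches of the remaining two listeners. The accessors on OutputStreamEvent and ExceptionStreamEvent, as well as the way a concurrency failure is detected, are my assumptions rather than the original code.

// Step 6 of the flow: fan the output event out into one input event per sub task
public class OutputStreamEventListener extends
		AbstractEventListener<OutputStreamEvent> {

	@Override
	public void onApplicationEvent(OutputStreamEvent event) {
		for (LightweightTask subTask : event.getCurrentStage().getSubTasks()) {
			EventManager.instance.publishEvent(new InputStreamEvent(subTask));
		}
	}
}

// Step 4 of the flow: retry the stage only when the failure is concurrency related
public class ExceptionStreamEventListener extends
		AbstractEventListener<ExceptionStreamEvent> {

	@Override
	public void onApplicationEvent(ExceptionStreamEvent event) {
		if (isConcurrencyRelated(event.getCause())) {
			EventManager.instance.publishEvent(
					new InputStreamEvent(event.getCurrentStage()));
		}
		// otherwise just return: this branch of the pipeline stops here
	}

	private boolean isConcurrencyRelated(Throwable t) {
		while (t != null) {
			// placeholder check; a real project would look for its own
			// optimistic-locking / concurrency exception type here
			if (t instanceof java.util.ConcurrentModificationException) {
				return true;
			}
			t = t.getCause();
		}
		return false;
	}
}

Note that if a sub task can have more than one pre task, the output listener (or the input listener) would also need to verify that all pre tasks have reached SUCCESS before the sub task is actually processed.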

At last, don't forget to register the listener(s) in Spring so they are initialized when the application context starts up.
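A sketch of that registration, assuming Java-based configuration is available (plain XML bean declarations work just as well): any ApplicationListener bean declared in the context is picked up automatically at startup, and the EventManager bean from the sketch above needs to be registered too.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TaskPipelineConfig {

	@Bean
	public InputStreamEventListener inputStreamEventListener() {
		return new InputStreamEventListener();
	}

	@Bean
	public OutputStreamEventListener outputStreamEventListener() {
		return new OutputStreamEventListener();
	}

	@Bean
	public ExceptionStreamEventListener exceptionStreamEventListener() {
		return new ExceptionStreamEventListener();
	}

	@Bean
	public EventManager eventManager() {
		return new EventManager();
	}
}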

So, in this way we've finished the implementation in a totally different way and got rid of the coupling. But what about asynchronous processing? Is there any chance to run tasks that have no dependency on each other, like task2 and task3, in parallel?

Of course. Spring has built-in support for both synchronous and asynchronous listeners; all we have to do is configure an executor (such as a thread pool) for the event multicaster. Pretty cool, right?
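A sketch of what that configuration could look like, assuming a thread pool is acceptable: registering a SimpleApplicationEventMulticaster under the well-known bean name applicationEventMulticaster makes the context dispatch events through its executor, so independent stages such as task2 and task3 are processed in parallel. The pool sizes here are arbitrary.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.ApplicationEventMulticaster;
import org.springframework.context.event.SimpleApplicationEventMulticaster;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
public class AsyncEventConfig {

	@Bean(name = "applicationEventMulticaster")
	public ApplicationEventMulticaster applicationEventMulticaster() {
		SimpleApplicationEventMulticaster multicaster =
				new SimpleApplicationEventMulticaster();
		ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
		executor.setCorePoolSize(4);
		executor.setMaxPoolSize(8);
		executor.initialize();
		multicaster.setTaskExecutor(executor);
		return multicaster;
	}
}

Since the whole retry loop in step 4 is driven by events rather than by the call stack, switching to an asynchronous multicaster does not change the control flow, only the threads on which each listener runs.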

 


