1 Introduction
The reactor is an event-driven concurrency pattern commonly used in network servers and event-loop systems. Its main job is to handle I/O on one or more threads without blocking, so that large numbers of concurrent events can be processed efficiently.
One loop per thread or process. The following is quoted from the Wikipedia article on the reactor pattern:
The reactor software design pattern is an event handling strategy that can respond to many potential service requests concurrently. The pattern's key component is an event loop, running in a single thread or process, which demultiplexes incoming requests and dispatches them to the correct request handler.[1]
By relying on event-based mechanisms rather than blocking I/O or multi-threading, a reactor can handle many concurrent I/O bound requests with minimal delay.[2] A reactor also allows for easily modifying or expanding specific request handler routines, though the pattern does have some drawbacks and limitations.[1]
With its balance of simplicity and scalability, the reactor has become a central architectural element in several server applications and software frameworks for networking. Derivations such as the multireactor and proactor also exist for special cases where even greater throughput, performance, or request complexity are necessary.[1][2][3][4]
Where the reactor framework sits within libevent's layers:
[Figure: reactor layering diagram]
1.1 Components
Event source: a system has many event sources, such as network sockets, file descriptors, and timers, each of which can fire events like read, write, or timeout.
Demultiplexer: the demultiplexer (usually a system call such as select(), poll(), or epoll) monitors the event sources and marks the ones on which events have occurred.
Dispatcher: the core of the reactor design. After receiving events from the demultiplexer, the dispatcher routes each one to the corresponding event handler; every event maps to a predefined handler function.
Event handler: holds the actual processing logic. When the dispatcher delivers an event, the handler deals with it, e.g., accepting a network connection request or reading data from a socket.
1.2 Workflow
- Wait for events: the reactor first blocks in a system call (such as select() or epoll_wait()) until some I/O event occurs.
- Demultiplex: when an I/O event occurs, the demultiplexer (select()/epoll_wait()) returns the set of event sources that are ready.
- Dispatch: the dispatcher determines which events are ready and hands each one to its corresponding event handler.
- Repeat: once the handlers finish, the reactor goes back to waiting for events, and the cycle continues. A minimal sketch of this loop follows.
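To make these four steps concrete, here is a minimal single-threaded reactor loop in C on Linux epoll. It is a sketch only: handle_io() is a hypothetical stand-in for a real event handler, the epoll instance is assumed to be set up elsewhere, and error handling is trimmed.

/* Minimal single-threaded reactor sketch (Linux epoll). */
#include <stdint.h>
#include <unistd.h>
#include <sys/epoll.h>

#define MAX_EVENTS 64

/* hypothetical event handler: drains whatever is ready on fd */
static void handle_io(int fd, uint32_t events)
{
    char buf[4096];
    if (events & EPOLLIN) {
        ssize_t n = read(fd, buf, sizeof(buf)); /* decode/compute/encode would go here */
        if (n <= 0)
            close(fd);                          /* peer closed or error */
    }
}

int run_reactor(int epfd)
{
    struct epoll_event evs[MAX_EVENTS];
    for (;;) {
        /* 1. wait for events (demultiplex) */
        int n = epoll_wait(epfd, evs, MAX_EVENTS, -1);
        if (n < 0)
            return -1;
        /* 2. dispatch each ready event to its handler */
        for (int i = 0; i < n; i++)
            handle_io(evs[i].data.fd, evs[i].events);
        /* 3. loop back and wait again */
    }
}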
1.3 Single-threaded vs. multi-threaded
- Single-threaded reactor: suited to simple concurrency. The whole flow runs in one thread, so no thread synchronization is needed; however, a long-running handler stalls every other event. The Redis in-memory cache/database is a well-known open-source example.
- Multi-threaded reactor: separates I/O events from the actual processing. The reactor listens and dispatches in a single thread while handing the processing to a worker thread pool. This avoids blocking and raises concurrent throughput; the memcached cache is an open-source example.
1.4 Reactor and proactor
- Reactor is a synchronous non-blocking model: the event loop waits for events, and once an event is ready it hands the event to a handler, which performs the I/O itself.
- Proactor is an asynchronous model: the kernel carries out the operation (e.g., the I/O itself) when the event occurs, then notifies the application for further processing. The sketch below contrasts the two call shapes.
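A hedged sketch of the difference in call shape, using POSIX APIs (epoll for the reactor side, POSIX AIO for the proactor side; fd/epfd setup and error handling omitted):

/* Reactor vs. proactor call shapes (POSIX; sketch only). */
#include <aio.h>
#include <string.h>
#include <unistd.h>
#include <sys/epoll.h>

static char buf[4096];

/* Reactor: readiness notification; the application does the read. */
ssize_t reactor_read(int epfd)
{
    struct epoll_event ev;
    epoll_wait(epfd, &ev, 1, -1);              /* "fd is now readable" */
    return read(ev.data.fd, buf, sizeof(buf)); /* app performs the I/O */
}

/* Proactor: completion notification; the kernel does the read. */
ssize_t proactor_read(int fd)
{
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    aio_read(&cb);                             /* kernel performs the I/O */
    const struct aiocb *list[1] = { &cb };
    aio_suspend(list, 1, NULL);                /* wait for completion */
    return aio_return(&cb);                    /* data is already in buf */
}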
2 Principles
2.1 Component diagram
The relationships among the reactor components:
[Figure: reactor component diagram]
2.2 Sequence diagram
How the components interact over time:
[Figure: component working-sequence diagram]
3 Reactor
3.1 Classic service design
Explanation:
- This is the synchronous blocking model;
- Client requests are served one at a time: after a client connects, its request flows through read => decode => compute => encode => send, and only once that whole pipeline finishes can the next client request be served;
- The unit of concurrency is the client, which is coarse; response latency rises quickly under load, so it is unfit for high-concurrency scenarios, though it suits applications such as MySQL;
- Each handler can be a thread or a process;
3.2 Single reactor per thread
Explanation:
- One thread runs one reactor and one acceptor; collection, dispatch, and handling of every client's I/O events all happen on this thread;
- The thread owns a single acceptor dedicated to handling clients' connect requests;
- All I/O operations and compute tasks also execute on this reactor thread;
- The unit of concurrency is the event rather than the client, a much finer granularity. It also neatly avoids data-ordering problems, but it cannot exploit multiple CPU cores; this suits in-memory databases such as Redis;
- Running multiple single-reactor-per-thread instances fixes those drawbacks: it exploits multiple cores and suits I/O-intensive workloads while staying very flexible. Under multiple processes, however, the accept() thundering-herd problem must be solved, as nginx does; one common approach is sketched below.
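As a hedged illustration of the multi-process fix, here is the SO_REUSEPORT approach (Linux 3.9+), the mechanism behind nginx's reuseport option: each worker binds its own listening socket to the same port, and the kernel load-balances incoming connections, so no two workers ever wake for the same connection.

/* Per-worker listener with SO_REUSEPORT (sketch; error handling omitted). */
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int make_listener(uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    /* every worker sets this on its own socket before bind() */
    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(port);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(fd, SOMAXCONN);
    return fd; /* each worker adds its own fd to its own reactor */
}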
3.3 Single reactor + worker thread pool
Explanation:
- Building on single reactor per thread, this mode separates I/O operations from event business logic: the reactor thread acts as the acceptor and performs all I/O, while all compute tasks go to a thread pool;
- When a client request arrives, the reactor's acceptor accepts the connect request; after the read completes, the fd and the business-logic handler are packaged into a task and pushed onto the queued tasks, where a pool thread is assigned to process it; when processing finishes, control returns to the reactor thread, which sends the result to the client, and the cycle repeats;
- The reactor thread and the thread pool communicate through a queue (sketched below): the former handles I/O, the latter business logic;
- Drawback: the acceptor and all I/O run on the single reactor thread, which is the bottleneck;
- Advantage: with I/O on the reactor thread and business logic on the pool, it can fully exploit multiple CPU cores and still neatly avoid data-ordering problems; suitable for high-concurrency scenarios;
- The bottleneck above can be removed by moving to multiple reactors + worker thread pool (§3.4);
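A minimal sketch of the reactor-to-pool handoff under stated assumptions (POSIX threads; fixed-size ring buffer; no shutdown path and queue overflow unhandled; submit_task and worker_main are hypothetical names):

/* Reactor thread enqueues (fd, handler) tasks; pool threads run them. */
#include <pthread.h>

#define QCAP 1024

struct task { int fd; void (*handler)(int fd); };

static struct task q[QCAP];
static int qhead, qtail, qlen;
static pthread_mutex_t qmu = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qcv = PTHREAD_COND_INITIALIZER;

/* called from the reactor thread after read() completes */
void submit_task(int fd, void (*handler)(int))
{
    pthread_mutex_lock(&qmu);
    q[qtail] = (struct task){ fd, handler };
    qtail = (qtail + 1) % QCAP;
    qlen++;
    pthread_cond_signal(&qcv);
    pthread_mutex_unlock(&qmu);
}

/* each pool thread runs this loop */
void *worker_main(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&qmu);
        while (qlen == 0)
            pthread_cond_wait(&qcv, &qmu);
        struct task t = q[qhead];
        qhead = (qhead + 1) % QCAP;
        qlen--;
        pthread_mutex_unlock(&qmu);
        t.handler(t.fd); /* decode => compute => encode */
    }
    return NULL;
}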
3.4 Multiple reactors + thread pool
Explanation:
- The difference from single reactor + worker thread pool is that the reactor thread is split in two by responsibility: a mainReactor thread that handles only clients' connect requests, and subReactor threads that perform all the I/O;
- Everything else matches the single reactor + worker thread pool mode and is not repeated here; a minimal handoff sketch follows.
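A hedged mainReactor/subReactor handoff sketch (single process on Linux, so fds are shared across threads; a plain pipe carries each accepted fd to a sub-reactor; error handling omitted):

/* mainReactor accepts; subReactor owns all I/O for its connections. */
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/socket.h>

/* mainReactor thread: accept only, then hand off */
void main_reactor(int listen_fd, int handoff_wr)
{
    for (;;) {
        int conn = accept(listen_fd, NULL, NULL);
        write(handoff_wr, &conn, sizeof(conn)); /* pass fd to a sub-reactor */
    }
}

/* subReactor thread: watches the pipe's read end plus its connections */
void sub_reactor(int epfd, int handoff_rd)
{
    struct epoll_event ev, evs[64];
    ev.events  = EPOLLIN;
    ev.data.fd = handoff_rd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, handoff_rd, &ev);
    for (;;) {
        int n = epoll_wait(epfd, evs, 64, -1);
        for (int i = 0; i < n; i++) {
            if (evs[i].data.fd == handoff_rd) { /* a new connection arrived */
                int conn;
                read(handoff_rd, &conn, sizeof(conn));
                ev.events  = EPOLLIN;
                ev.data.fd = conn;
                epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &ev);
            } else {
                /* read the request, queue a task to the worker pool, etc. */
            }
        }
    }
}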
4 The reactor in libevent
4.1 eventop: I/O multiplexing
eventop is libevent's abstraction over each platform's I/O-multiplexing operations:
/** Structure to define the backend of a given event_base. */
// libevent's abstract interface over each platform's I/O-multiplexing operations
struct eventop {
    /** The name of this backend. */
    const char *name;
    /** Function to set up an event_base to use this backend. It should
     * create a new structure holding whatever information is needed to
     * run the backend, and return it. The returned pointer will get
     * stored by event_init into the event_base.evbase field. On failure,
     * this function should return NULL. */
    void *(*init)(struct event_base *);
    /** Enable reading/writing on a given fd or signal. 'events' will be
     * the events that we're trying to enable: one or more of EV_READ,
     * EV_WRITE, EV_SIGNAL, and EV_ET. 'old' will be those events that
     * were enabled on this fd previously. 'fdinfo' will be a structure
     * associated with the fd by the evmap; its size is defined by the
     * fdinfo field below. It will be set to 0 the first time the fd is
     * added. The function should return 0 on success and -1 on error.
     */
    int (*add)(struct event_base *, evutil_socket_t fd, short old, short events, void *fdinfo);
    /** As "add", except 'events' contains the events we mean to disable. */
    int (*del)(struct event_base *, evutil_socket_t fd, short old, short events, void *fdinfo);
    /** Function to implement the core of an event loop. It must see which
        added events are ready, and cause event_active to be called for each
        active event (usually via event_io_active or such). It should
        return 0 on success and -1 on error.
     */
    int (*dispatch)(struct event_base *, struct timeval *);
    /** Function to clean up and free our data from the event_base. */
    void (*dealloc)(struct event_base *);
    /** Flag: set if we need to reinitialize the event base after we fork.
     */
    int need_reinit;
    /** Bit-array of supported event_method_features that this backend can
     * provide. */
    enum event_method_feature features;
    /** Length of the extra information we should record for each fd that
        has one or more active events. This information is recorded
        as part of the evmap entry for each fd, and passed as an argument
        to the add and del functions above.
     */
    size_t fdinfo_len;
};
Which multiplexing mechanisms a given platform supports is decided at build time. Because a platform may support several, they are collected into an array of struct eventop pointers:
/* Array of backends in order of preference. */
static const struct eventop *eventops[] = {
#ifdef EVENT__HAVE_EVENT_PORTS
    &evportops,
#endif
#ifdef EVENT__HAVE_WORKING_KQUEUE
    &kqops,
#endif
#ifdef EVENT__HAVE_EPOLL
    &epollops,
#endif
#ifdef EVENT__HAVE_DEVPOLL
    &devpollops,
#endif
#ifdef EVENT__HAVE_POLL
    &pollops,
#endif
#ifdef EVENT__HAVE_SELECT
    &selectops,
#endif
#ifdef _WIN32
    &win32ops,
#endif
    NULL
};
Notably, when a platform supports more than one multiplexing mechanism, libevent chooses according to the priority defined by the struct eventop *eventops[] array above: the smaller the array index, the higher the priority.
On Linux, for example, three mechanisms are available: epoll, poll, and select. libevent prefers epoll, then poll, then select; as soon as one backend initializes successfully, no further candidates are tried.
The actual decision is made in event_base_new_with_config (individual mechanisms can also be disabled through environment variables):
struct event_base *
event_base_new_with_config(const struct event_config *cfg)
{
    int i;
    struct event_base *base;
    int should_check_environment;

#ifndef EVENT__DISABLE_DEBUG_MODE
    event_debug_mode_too_late = 1;
#endif

    if ((base = mm_calloc(1, sizeof(struct event_base))) == NULL) {
        event_warn("%s: calloc", __func__);
        return NULL;
    }

    if (cfg)
        base->flags = cfg->flags;

    // unrelated logic elided here
    ......

    // try each backend, in the priority order of struct eventop *eventops[]
    for (i = 0; eventops[i] && !base->evbase; i++) {
        if (cfg != NULL) {
            /* determine if this backend should be avoided */
            if (event_config_is_avoided_method(cfg,
                eventops[i]->name))
                continue;
            if ((eventops[i]->features & cfg->require_features)
                != cfg->require_features)
                continue;
        }

        /* also obey the environment variables */
        // this is where an environment variable can disable a backend
        if (should_check_environment &&
            event_is_method_disabled(eventops[i]->name))
            continue;

        // try the eventops[i] backend
        base->evsel = eventops[i];
        // context instance for the chosen backend's operations
        base->evbase = base->evsel->init(base);
    }

    // unrelated logic elided here
    ......

    return (base);
}
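For reference, an application can steer and then inspect this choice through libevent's public API; the EVENT_NO* environment variables (e.g. EVENT_NOEPOLL=1) serve the same purpose at run time. A small, hedged usage sketch (all calls are public libevent-2.x API; error handling is minimal):

/* Steer backend selection, then print which backend won. */
#include <stdio.h>
#include <event2/event.h>

int main(void)
{
    struct event_config *cfg = event_config_new();
    /* pretend epoll is unavailable; forces the next backend in eventops[] */
    event_config_avoid_method(cfg, "epoll");

    struct event_base *base = event_base_new_with_config(cfg);
    event_config_free(cfg);
    if (!base)
        return 1;

    /* prints "poll", "select", "kqueue", ... depending on the platform */
    printf("using backend: %s\n", event_base_get_method(base));

    event_base_free(base);
    return 0;
}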
4.2 The event loop
The event loop is one of the reactor pattern's core components; in libevent it is event_base_dispatch:
int
event_base_dispatch(struct event_base *event_base)
{
    return (event_base_loop(event_base, 0));
}
event_base_dispatch is really just a wrapper around event_base_loop:
int
event_base_loop(struct event_base *base, int flags)
{
    const struct eventop *evsel = base->evsel;
    struct timeval tv;
    struct timeval *tv_p;
    int res, done, retval = 0;

    /* Grab the lock. We will release it inside evsel.dispatch, and again
     * as we invoke user callbacks. */
    EVBASE_ACQUIRE_LOCK(base, th_base_lock);

    if (base->running_loop) {
        event_warnx("%s: reentrant invocation. Only one event_base_loop"
            " can run on each event_base at once.", __func__);
        EVBASE_RELEASE_LOCK(base, th_base_lock);
        return -1;
    }

    base->running_loop = 1;

    clear_time_cache(base);

    if (base->sig.ev_signal_added && base->sig.ev_n_signals_added)
        evsig_set_base_(base);

    done = 0;

#ifndef EVENT__DISABLE_THREAD_SUPPORT
    base->th_owner_id = EVTHREAD_GET_ID();
#endif

    base->event_gotterm = base->event_break = 0;

    // the main body of the event loop
    while (!done) {
        base->event_continue = 0;
        base->n_deferreds_queued = 0;

        /* Terminate the loop if we have been asked to */
        if (base->event_gotterm) {
            break;
        }

        if (base->event_break) {
            break;
        }

        tv_p = &tv;
        if (!N_ACTIVE_CALLBACKS(base) && !(flags & EVLOOP_NONBLOCK)) {
            timeout_next(base, &tv_p);
        } else {
            /*
             * if we have active events, we just poll new events
             * without waiting.
             */
            evutil_timerclear(&tv);
        }

        /* If we have no events, we just exit */
        if (0==(flags&EVLOOP_NO_EXIT_ON_EMPTY) &&
            !event_haveevents(base) && !N_ACTIVE_CALLBACKS(base)) {
            event_debug(("%s: no events registered.", __func__));
            retval = 1;
            goto done;
        }

        // move events from the later queue onto the active queue, then run them
        event_queue_make_later_events_active(base);

        clear_time_cache(base);

        // the multiplexing dispatch: the heart of the reactor
        // once I/O events are ready, it moves events from struct event_io_map *io
        // into struct evcallback_list *activequeues
        res = evsel->dispatch(base, tv_p);

        if (res == -1) {
            event_debug(("%s: dispatch returned unsuccessfully.",
                __func__));
            retval = -1;
            goto done;
        }

        update_time_cache(base);

        // process timer events first
        timeout_process(base);

        // only here are the callbacks in struct evcallback_list *activequeues actually run
        if (N_ACTIVE_CALLBACKS(base)) {
            int n = event_process_active(base);
            if ((flags & EVLOOP_ONCE)
                && N_ACTIVE_CALLBACKS(base) == 0
                && n != 0)
                done = 1;
        } else if (flags & EVLOOP_NONBLOCK)
            done = 1;
    }
    event_debug(("%s: asked to terminate loop.", __func__));

done:
    clear_time_cache(base);
    base->running_loop = 0;

    EVBASE_RELEASE_LOCK(base, th_base_lock);

    return (retval);
}
That is the main logic of libevent's event loop.
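For orientation, here is a minimal program that drives this loop: one persistent read event on stdin, dispatched until EOF breaks the loop. Only public libevent-2.x calls are used; the callback body is illustrative.

/* Minimal libevent program: one EV_READ|EV_PERSIST event on stdin. */
#include <stdio.h>
#include <unistd.h>
#include <event2/event.h>

/* fires whenever stdin becomes readable */
static void on_stdin(evutil_socket_t fd, short what, void *arg)
{
    char buf[256];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n <= 0) {                        /* EOF: stop the loop */
        event_base_loopbreak((struct event_base *)arg);
        return;
    }
    buf[n] = '\0';
    printf("got: %s", buf);
}

int main(void)
{
    struct event_base *base = event_base_new();
    /* EV_PERSIST keeps the event registered after each callback */
    struct event *ev = event_new(base, 0 /* stdin */, EV_READ | EV_PERSIST,
                                 on_stdin, base);
    event_add(ev, NULL);                 /* no timeout */
    event_base_dispatch(base);           /* blocks inside evsel->dispatch() */
    event_free(ev);
    event_base_free(base);
    return 0;
}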
4.3 Event dispatch: event_base
event_base is another core component of libevent's reactor, used to manage and schedule events. Each event_base is one event dispatcher and holds every data structure the event loop needs.
event_base provides the dispatch machinery: all events (I/O, signals, timers, and so on) must be registered with an event_base.
It monitors many file descriptors or signals through the underlying multiplexing mechanism (select, epoll, kqueue, ...) and invokes the corresponding callback when an event fires.
// the heart of the reactor: dispatches libevent's I/O, signal, and timer events
struct event_base {
    /** Function pointers and other data to describe this event_base's
     * backend. */
    // abstraction over system calls such as select / poll / epoll / kqueue / devpoll
    // through it, events are registered with, collected from, processed by, and
    // unregistered from the platform multiplexer via its common operations
    // which backend is used is fixed at build time by the configure options
    const struct eventop *evsel;
    /** Pointer to backend-specific data. */
    // the backend's init() allocates a matching context (e.g. an epollop instance),
    // needed later by calls such as epoll_wait
    // unlike evsel above, this is the multiplexer's per-instance context
    void *evbase;
    /** List of changes to tell backend about at next dispatch. Only used
     * by the O(1) backends. */
    // collects pending event changes for O(1) backends
    // needed by epoll and kqueue
    struct event_changelist changelist;
    /** Function pointers used to describe the backend that this event_base
     * uses for signals */
    // libevent unifies I/O / signal / timer handling; this is the unified signal backend
    const struct eventop *evsigsel;
    /** Data to implement the common signal handler code. */
    // analogous to evbase: the signal-handling context
    struct evsig_info sig;
    /** Number of virtual events */
    int virtual_event_count;
    /** Maximum number of virtual events active */
    int virtual_event_count_max;
    /** Number of total events added to this event_base */
    // how many events are actually being watched
    int event_count;
    /** Maximum number of total events added to this event_base */
    // high-water mark of added events
    int event_count_max;
    /** Number of total events active in this event_base */
    // how many events are currently active (ready)
    int event_count_active;
    /** Maximum number of total events active in this event_base */
    // high-water mark of active events on this event_base
    int event_count_active_max;
    /** Set if we should terminate the loop once we're done processing
     * events. */
    // makes the event_base_dispatch loop exit gracefully
    int event_gotterm;
    /** Set if we should terminate the loop immediately */
    // makes the event_base_dispatch loop exit immediately
    int event_break;
    /** Set if we should start a new instance of the loop immediately. */
    // start a new iteration of the event loop immediately
    int event_continue;
    /** The currently running priority of events */
    // libevent processes events by priority; this is the priority currently being run
    int event_running_priority;
    /** Set if we're running the event_base_loop function, to prevent
     * reentrant invocation. */
    // whether the event_base_dispatch loop is already running
    int running_loop;
    /** Set to the number of deferred_cbs we've made 'active' in the
     * loop. This is a hack to prevent starvation; it would be smarter
     * to just use event_config_set_max_dispatch_interval's max_callbacks
     * feature */
    // size of the deferred-callback queue
    int n_deferreds_queued;

    /* Active event management. */
    /** An array of nactivequeues queues for active event_callbacks (ones
     * that have triggered, and whose callbacks need to be called). Low
     * priority numbers are more important, and stall higher ones.
     */
    // the callback queues that the current cycle of the event_base_dispatch loop runs
    struct evcallback_list *activequeues;
    /** The length of the activequeues array */
    // length of the activequeues array above
    int nactivequeues;
    /** A list of event_callbacks that should become active the next time
     * we process events, but not this time. */
    // callbacks to run on the next cycle of the event_base_dispatch loop, not this one
    struct evcallback_list active_later_queue;

    /* common timeout logic */
    /** An array of common_timeout_list* for all of the common timeout
     * values we know. */
    // the next three fields implement the "common timeout" logic
    // these differ from ordinary timers, which are kept in the min-heap below
    // this is the array of common-timeout queues
    struct common_timeout_list **common_timeout_queues;
    /** The number of entries used in common_timeout_queues */
    // how many of the common-timeout queues are in use
    int n_common_timeouts;
    /** The total size of common_timeout_queues. */
    // how many common-timeout queues have been allocated
    int n_common_timeouts_allocated;

    /** Mapping from file descriptors to enabled (added) events */
    // tracks every I/O event and its callback, i.e. all registered I/O event instances
    struct event_io_map io;
    /** Mapping from signal numbers to enabled (added) events. */
    // tracks every signal event and its callback, i.e. all registered signal events
    struct event_signal_map sigmap;
    /** Priority queue of events with timeouts. */
    // ordinary timer events and their callbacks, kept in a min-heap
    struct min_heap timeheap;

    /** Stored timeval: used to avoid calling gettimeofday/clock_gettime
     * too often. */
    // a cache that avoids calling gettimeofday/clock_gettime too frequently
    struct timeval tv_cache;
    struct evutil_monotonic_timer monotonic_timer;
    /** Difference between internal time (maybe from clock_gettime) and
     * gettimeofday. */
    struct timeval tv_clock_diff;
    /** Second in which we last updated tv_clock_diff, in monotonic time. */
    time_t last_updated_clock_diff;

#ifndef EVENT__DISABLE_THREAD_SUPPORT
    /* threading support */
    /** The thread currently running the event_loop for this base */
    // ID of the thread currently running the event_base_dispatch loop
    unsigned long th_owner_id;
    /** A lock to prevent conflicting accesses to this event_base */
    // serializes access to the event_base under multithreading
    void *th_base_lock;
    /** A condition that gets signalled when we're done processing an
     * event with waiters on it. */
    void *current_event_cond;
    /** Number of threads blocking on current_event_cond. */
    int current_event_waiters;
#endif
    /** The event whose callback is executing right now */
    struct event_callback *current_event;

#ifdef _WIN32
    /** IOCP support structure, if IOCP is enabled. */
    // IOCP multiplexing on Windows
    struct event_iocp_port *iocp;
#endif

    /** Flags that this base was configured with */
    // flags this event_base was configured with
    enum event_base_config_flag flags;

    struct timeval max_dispatch_time;
    int max_dispatch_callbacks;
    int limit_callbacks_after_prio;

    // the fields below wake the event_base_dispatch loop from another thread
    /* Notify main thread to wake up break, etc. */
    /** True if the base already has a pending notify, and we don't need
     * to add any more. */
    // whether a wakeup of the event_base_dispatch loop is already pending
    int is_notify_pending;
    /** A socketpair used by some th_notify functions to wake up the main
     * thread. */
    // the wakeup works through socket-like reads and writes on this pair
    evutil_socket_t th_notify_fd[2];
    /** An event used by some th_notify functions to wake up the main
     * thread. */
    // the wakeup registers an ordinary event watching the read end for readiness
    struct event th_notify;
    /** A function used to wake up the main thread from another thread. */
    // function invoked from another thread to wake up event_base_dispatch
    int (*th_notify_fn)(struct event_base *base);

    /** Saved seed for weak random number generator. Some backends use
     * this to produce fairness among sockets. Protected by th_base_lock. */
    // backends like select / poll must scan their arrays after a successful return
    // to find out which fds are actually ready; this randomness keeps the scan fair
    struct evutil_weakrand_state weakrand_seed;

    /** List of event_onces that have not yet fired. */
    // one-shot events; there can be many, so they are kept in a linked list
    LIST_HEAD(once_event_list, event_once) once_events;
};
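The th_notify_* fields above are what make cross-thread wakeups work. As a hedged sketch: once threading support is enabled, a call such as event_base_loopbreak from another thread takes th_base_lock and writes to th_notify_fd internally, so the blocking evsel->dispatch call returns (EVLOOP_NO_EXIT_ON_EMPTY requires libevent 2.1+; link with -levent_pthreads):

/* Waking event_base_loop from another thread. */
#include <unistd.h>
#include <pthread.h>
#include <event2/event.h>
#include <event2/thread.h>

static void *stopper(void *arg)
{
    sleep(3);
    /* safe from another thread once threading is enabled; internally
     * writes th_notify_fd[1] so evsel->dispatch() wakes up */
    event_base_loopbreak((struct event_base *)arg);
    return NULL;
}

int main(void)
{
    evthread_use_pthreads();             /* install pthread locks/conditions */
    struct event_base *base = event_base_new();
    pthread_t t;
    pthread_create(&t, NULL, stopper, base);
    event_base_loop(base, EVLOOP_NO_EXIT_ON_EMPTY); /* returns after loopbreak */
    pthread_join(t, NULL);
    event_base_free(base);
    return 0;
}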
4.4 Event processing
Once the backend's dispatch has filled the active queues, event_process_active drains them in priority order:
/*
 * Active events are stored in priority queues. Lower priorities are always
 * processed before higher priorities. Low priority events can starve high
 * priority ones.
 */
static int
event_process_active(struct event_base *base)
{
    /* Caller must hold th_base_lock */
    struct evcallback_list *activeq = NULL;
    int i, c = 0;
    const struct timeval *endtime;
    struct timeval tv;
    const int maxcb = base->max_dispatch_callbacks;
    const int limit_after_prio = base->limit_callbacks_after_prio;
    if (base->max_dispatch_time.tv_sec >= 0) {
        update_time_cache(base);
        gettime(base, &tv);
        evutil_timeradd(&base->max_dispatch_time, &tv, &tv);
        endtime = &tv;
    } else {
        endtime = NULL;
    }

    for (i = 0; i < base->nactivequeues; ++i) {
        if (TAILQ_FIRST(&base->activequeues[i]) != NULL) {
            base->event_running_priority = i;
            activeq = &base->activequeues[i];
            if (i < limit_after_prio)
                c = event_process_active_single_queue(base, activeq,
                    INT_MAX, NULL);
            else
                c = event_process_active_single_queue(base, activeq,
                    maxcb, endtime);
            if (c < 0) {
                goto done;
            } else if (c > 0)
                break; /* Processed a real event; do not
                        * consider lower-priority events */
            /* If we get here, all of the events we processed
             * were internal. Continue. */
        }
    }

done:
    base->event_running_priority = -1;
    return c;
}
/*
  Helper for event_process_active to process all the events in a single queue,
  releasing the lock as we go. This function requires that the lock be held
  when it's invoked. Returns -1 if we get a signal or an event_break that
  means we should stop processing any active events now. Otherwise returns
  the number of non-internal event_callbacks that we processed.
*/
static int
event_process_active_single_queue(struct event_base *base,
    struct evcallback_list *activeq,
    int max_to_process, const struct timeval *endtime)
{
    struct event_callback *evcb;
    int count = 0;

    EVUTIL_ASSERT(activeq != NULL);

    // this is where libevent finally runs event callbacks
    for (evcb = TAILQ_FIRST(activeq); evcb; evcb = TAILQ_FIRST(activeq)) {
        struct event *ev=NULL;
        if (evcb->evcb_flags & EVLIST_INIT) {
            ev = event_callback_to_event(evcb);

            if (ev->ev_events & EV_PERSIST || ev->ev_flags & EVLIST_FINALIZING)
                event_queue_remove_active(base, evcb);
            else
                event_del_nolock_(ev, EVENT_DEL_NOBLOCK);
            event_debug((
                "event_process_active: event: %p, %s%s%scall %p",
                ev,
                ev->ev_res & EV_READ ? "EV_READ " : " ",
                ev->ev_res & EV_WRITE ? "EV_WRITE " : " ",
                ev->ev_res & EV_CLOSED ? "EV_CLOSED " : " ",
                ev->ev_callback));
        } else {
            event_queue_remove_active(base, evcb);
            event_debug(("event_process_active: event_callback %p, "
                "closure %d, call %p",
                evcb, evcb->evcb_closure, evcb->evcb_cb_union.evcb_callback));
        }

        if (!(evcb->evcb_flags & EVLIST_INTERNAL))
            ++count;

        base->current_event = evcb;
#ifndef EVENT__DISABLE_THREAD_SUPPORT
        base->current_event_waiters = 0;
#endif

        // the closure type of this event callback
        switch (evcb->evcb_closure) {
        case EV_CLOSURE_EVENT_SIGNAL:
            EVUTIL_ASSERT(ev != NULL);
            event_signal_closure(base, ev);
            break;
        case EV_CLOSURE_EVENT_PERSIST:
            EVUTIL_ASSERT(ev != NULL);
            event_persist_closure(base, ev);
            break;
        case EV_CLOSURE_EVENT: {
            void (*evcb_callback)(evutil_socket_t, short, void *);
            EVUTIL_ASSERT(ev != NULL);
            evcb_callback = *ev->ev_callback;
            EVBASE_RELEASE_LOCK(base, th_base_lock);
            evcb_callback(ev->ev_fd, ev->ev_res, ev->ev_arg);
        }
        break;
        case EV_CLOSURE_CB_SELF: {
            void (*evcb_selfcb)(struct event_callback *, void *) = evcb->evcb_cb_union.evcb_selfcb;
            EVBASE_RELEASE_LOCK(base, th_base_lock);
            evcb_selfcb(evcb, evcb->evcb_arg);
        }
        break;
        case EV_CLOSURE_EVENT_FINALIZE:
        case EV_CLOSURE_EVENT_FINALIZE_FREE: {
            void (*evcb_evfinalize)(struct event *, void *);
            int evcb_closure = evcb->evcb_closure;
            EVUTIL_ASSERT(ev != NULL);
            base->current_event = NULL;
            evcb_evfinalize = ev->ev_evcallback.evcb_cb_union.evcb_evfinalize;
            EVUTIL_ASSERT((evcb->evcb_flags & EVLIST_FINALIZING));
            EVBASE_RELEASE_LOCK(base, th_base_lock);
            evcb_evfinalize(ev, ev->ev_arg);
            event_debug_note_teardown_(ev);
            if (evcb_closure == EV_CLOSURE_EVENT_FINALIZE_FREE)
                mm_free(ev);
        }
        break;
        case EV_CLOSURE_CB_FINALIZE: {
            void (*evcb_cbfinalize)(struct event_callback *, void *) = evcb->evcb_cb_union.evcb_cbfinalize;
            base->current_event = NULL;
            EVUTIL_ASSERT((evcb->evcb_flags & EVLIST_FINALIZING));
            EVBASE_RELEASE_LOCK(base, th_base_lock);
            evcb_cbfinalize(evcb, evcb->evcb_arg);
        }
        break;
        default:
            EVUTIL_ASSERT(0);
        }

        EVBASE_ACQUIRE_LOCK(base, th_base_lock);
        base->current_event = NULL;
#ifndef EVENT__DISABLE_THREAD_SUPPORT
        if (base->current_event_waiters) {
            base->current_event_waiters = 0;
            EVTHREAD_COND_BROADCAST(base->current_event_cond);
        }
#endif

        if (base->event_break)
            return -1;
        // run at most max_to_process callbacks per invocation
        if (count >= max_to_process)
            return count;
        // check the time budget against the cached clock
        if (count && endtime) {
            struct timeval now;
            update_time_cache(base);
            gettime(base, &now);
            if (evutil_timercmp(&now, endtime, >=))
                return count;
        }
        if (base->event_continue)
            break;
    }
    return count;
}
5 References
5.1 Reactor pattern (Wikipedia)
https://en.wikipedia.org/wiki/Reactor_pattern#Structure