The reactor pattern in the muduo network library (part 2)

This article walks through the core of the reactor pattern in the muduo network library: I/O multiplexing and event dispatching. It focuses on how the Channel class wraps a file descriptor to dispatch events, and how the Poller class wraps poll(2) to multiplex I/O. It also covers testing with timerfd(2), and using eventfd(2) to let the user wake up the I/O thread.


The reactor pattern in the muduo network library is built on "non-blocking I/O + I/O multiplexing": the program is structured as an event loop, and business logic is implemented in an event-driven way through event callbacks. Part 1 of this series laid out the basic framework of the event loop; this article builds on it by adding the event-driven and event-callback machinery, i.e. the two mechanisms at the heart of a reactor: I/O multiplexing and event dispatching. The loop multiplexes I/O and dispatches the ready I/O events to the event handlers of the corresponding file descriptors (fds).


1. Event-driven dispatch: I/O multiplexing and event dispatching

(1) The Channel class: wrapping a file descriptor (fd) for event dispatching

Each Channel object belongs to exactly one EventLoop, and therefore to exactly one I/O thread. Each Channel is responsible for dispatching the I/O events of a single file descriptor: it maps the different I/O events (read, write, error, ...) to different callbacks. The Channel class is thus a wrapper around a file descriptor; its constructor Channel(EventLoop* loop, int fd) binds the Channel to its owning EventLoop and to its fd. The data member events_ holds the set of events the fd is interested in and is used to update the poll(2) interest set; the data member revents_ holds the events that are currently ready and drives the callbacks. In the current program:

1) Channel::enableReading(), Channel::enableWriting(), etc. are the interface functions for setting the fd's interest set:

They first set the bits in events_, then call update() to push the Channel's new interest set into the I/O multiplexer poll(2). update() goes through the data member EventLoop* loop_, the pointer to the owning EventLoop: it calls EventLoop::updateChannel(), which in turn calls Poller::updateChannel(). (An open question: why not give Channel a Poller pointer and update poll(2) directly, instead of taking this detour?)

void enableReading() { events_ |= kReadEvent; update(); }
// void enableWriting() { events_ |= kWriteEvent; update(); }

void Channel::update()
{
  loop_->updateChannel(this);
}

2) Channel::setReadCallback(const EventCallback& cb), Channel::setWriteCallback(const EventCallback& cb) and Channel::setErrorCallback(const EventCallback& cb) are the interface functions for registering the user callbacks for the corresponding fd events.

void setReadCallback(const EventCallback& cb) { readCallback_ = cb; }
void setWriteCallback(const EventCallback& cb) { writeCallback_ = cb; }
void setErrorCallback(const EventCallback& cb) { errorCallback_ = cb; }

3) Channel::handleEvent() is the core of Channel: it implements event dispatching. It is called from EventLoop::loop(), and its job is to invoke the appropriate user callbacks according to the value of revents_, which is filled in by Poller::poll().

void Channel::handleEvent()
{
  if (revents_ & POLLNVAL)
    LOG_WARN << "Channel::handle_event() POLLNVAL";

  if (revents_ & (POLLERR | POLLNVAL))
    if (errorCallback_) 
      errorCallback_();

  if (revents_ & (POLLIN | POLLPRI | POLLRDHUP))
    if (readCallback_) 
      readCallback_();

  if (revents_ & POLLOUT) 
    if (writeCallback_) 
      writeCallback_();
}

//inside of function EventLoop::loop():
while (!quit_)
{
    activeChannels_.clear();
    poller_->poll(kPollTimeMs, &activeChannels_);
    for (ChannelList::iterator it = activeChannels_.begin();
        it != activeChannels_.end(); ++it)
    {
        (*it)->handleEvent();
    }
    doPendingFunctors();
}

(2) The Poller class: wrapping poll(2) for I/O multiplexing

First, the prototype of the poll(2) system call:

#include <poll.h>
int poll(struct pollfd fd[], nfds_t nfds, int timeout);

struct pollfd is defined as:

struct pollfd {
    int   fd;       // file descriptor
    short events;   // requested events
    short revents;  // returned events
};
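As a minimal, self-contained sketch of these fields in action (the helper name waitReadable is invented here for illustration), watching a single fd for readability:

```cpp
#include <poll.h>
#include <unistd.h>

// Wait up to timeoutMs for fd to become readable.
// events is the interest set (analogous to Channel::events_);
// revents is what the kernel reports back (analogous to Channel::revents_).
bool waitReadable(int fd, int timeoutMs)
{
  struct pollfd pfd;
  pfd.fd = fd;
  pfd.events = POLLIN;
  pfd.revents = 0;
  int n = ::poll(&pfd, 1, timeoutMs);
  return n > 0 && (pfd.revents & POLLIN);
}
```

A Poller generalizes this to a whole vector of struct pollfd, one entry per registered Channel.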

1) Preparing the data for poll(2):

The Poller wrapper first stores each fd's Channel in the ChannelMap data member channels_ (a std::map<int, Channel*>) via Poller::updateChannel(Channel* channel) (roughly: the user calls Channel::update() -> EventLoop::updateChannel() -> Poller::updateChannel()). From there it can feed each Channel's interest set to poll(2), and after every poll(2) call it writes the results back into each Channel's revents_.

2) Wrapping I/O multiplexing:

Poller::poll(int timeoutMs, ChannelList* activeChannels) first performs the poll(2) call, with its pollfd array built from the Channels registered in channels_; it blocks until some fd event occurs or the timeout expires, and poll(2) returns the number of ready events, numEvents. It then calls the internal helper Poller::fillActiveChannels(numEvents, activeChannels) to hand the ready Channels back to the caller: for each fd with pending events it looks up the Channel in channels_ and appends it to the output parameter ChannelList* activeChannels.

Note that step 2) does not combine I/O multiplexing with event dispatching: Poller only multiplexes, while the dispatching, Channel::handleEvent(), happens in EventLoop::loop(). This keeps the design safer, and it also makes it easy to swap in a more efficient multiplexing mechanism such as epoll(4).

With this, the basic skeleton of a complete Reactor is in place.


2. Testing with timerfd(2)

With the basic Reactor framework in place, we test it by using timerfd to add a timer to the EventLoop.

#include <sys/timerfd.h>

int timerfd_create(int clockid, int flags);

int timerfd_settime(int fd, int flags, const struct itimerspec *new_value, struct itimerspec *old_value);

int timerfd_gettime(int fd, struct itimerspec *curr_value);

A traditional Reactor implements timers by tuning the wait time of poll(2) or select(2). On Linux we now have timerfd, which lets us handle timers the same way we handle I/O events. For the rationale, see the article "Muduo 网络编程示例之三:定时器".
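The read side of a timerfd is worth spelling out: when the timer fires, the fd becomes readable and read(2) yields an 8-byte expiration count; in a level-triggered poller that count should be drained, or poll(2) will keep reporting the fd readable. A sketch, with invented helper names:

```cpp
#include <sys/timerfd.h>
#include <unistd.h>
#include <stdint.h>

// Arm a one-shot timer that fires after `ms` milliseconds.
int makeOneShotTimerfd(int ms)
{
  int fd = ::timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK | TFD_CLOEXEC);
  struct itimerspec howlong = {};
  howlong.it_value.tv_sec = ms / 1000;
  howlong.it_value.tv_nsec = (ms % 1000) * 1000000L;
  ::timerfd_settime(fd, 0, &howlong, NULL);
  return fd;
}

// Drain the expiration counter; returns how many times the timer
// has fired since the last read (0 if it has not fired yet).
uint64_t readExpirations(int fd)
{
  uint64_t howmany = 0;
  ssize_t n = ::read(fd, &howmany, sizeof howmany);
  return n == sizeof howmany ? howmany : 0;
}
```

In the test program below the loop simply quits on the first timeout, so the counter is never drained; a long-running timer Channel would do this read in its read callback.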

#include "EventLoopThread.hpp"
#include "EventLoop.hpp"
#include "Thread.hpp"
#include <sys/timerfd.h>
#include <unistd.h>
#include <string.h>
#include <memory>
#include <iostream>

using namespace std;

EventLoop* loop;

void timeout()
{
  cout<<"tid "<<CurrentThreadtid()<<": Timeout!"<<endl;
  loop->quit();
}

int main()
{
  EventLoopThread ELThread;
  loop = ELThread.startLoop();//thread2
  int timerfd=timerfd_create(CLOCK_MONOTONIC,TFD_NONBLOCK|TFD_CLOEXEC);
  struct itimerspec howlong;
  bzero(&howlong, sizeof howlong);
  howlong.it_value.tv_sec=3;
  timerfd_settime(timerfd,0,&howlong,NULL);
  Channel channel(loop,timerfd);
  channel.setReadCallback(timeout);  
  channel.enableReading();  

  sleep(5);//ensure the main thread do not exit faster than thread2
  close(timerfd);
  return 0;
}
baddy@ubuntu:~/Documents/Reactor/s1.1$ g++ -std=c++11 -pthread -o testTimerDemo MutexLockGuard.hpp Condition.hpp Thread.hpp Thread.cpp Channel.hpp Channel.cpp EventLoop.hpp EventLoop.cpp Poller.hpp Poller.cpp EventLoopThread.hpp testEventLoopThread.cpp 
baddy@ubuntu:~/Documents/Reactor/s1.1$ /usr/bin/valgrind ./testTimerDemo 
==25681== Memcheck, a memory error detector
==25681== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==25681== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==25681== Command: ./testTimerDemo
==25681== 
tid 25681: create a new thread
tid 25681: waiting
tid 25682: Thread::func_() started!
tid 25682: notified
tid 25681: received notification
tid 25682: start looping...
tid 25682: Timeout!
tid 25682: end looping...
tid 25682: Thread end!
==25681== 
==25681== HEAP SUMMARY:
==25681==     in use at exit: 0 bytes in 0 blocks
==25681==   total heap usage: 16 allocs, 16 frees, 74,552 bytes allocated
==25681== 
==25681== All heap blocks were freed -- no leaks are possible
==25681== 
==25681== For counts of detected and suppressed errors, rerun with: -v
==25681== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

muduo wraps the timerfd in a Channel, so its handling is identical to that of an ordinary file descriptor: the timerfd is registered with the I/O multiplexer, much like the approach in the next section. A TimerQueue class then builds the timer functionality on top; that is omitted here.


3. Waking up the I/O thread (extension): using eventfd(2)

The I/O thread normally blocks in the poll(2) call inside the event loop EventLoop::loop(); to make it run a user callback promptly, we need a way to wake it up. This is where eventfd(2) comes in. The call returns a file descriptor that supports the usual descriptor operations (read, write, poll, select, ...); here we only care about read and write.

#include <sys/eventfd.h>

 int eventfd(unsigned int initval, int flags);
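The semantics are those of a 64-bit counter maintained by the kernel: write(2) adds to it, and read(2) (without EFD_SEMAPHORE) returns the accumulated value and resets it to zero. A small sketch, with an invented function name:

```cpp
#include <sys/eventfd.h>
#include <unistd.h>
#include <stdint.h>

// Round-trip through an eventfd: two writes, one read.
uint64_t wakeupRoundTrip()
{
  int fd = ::eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
  uint64_t one = 1;
  ::write(fd, &one, sizeof one);    // what EventLoop::wakeup() does
  ::write(fd, &one, sizeof one);    // a second wakeup just bumps the counter
  uint64_t value = 0;
  ::read(fd, &value, sizeof value); // what EventLoop::handleRead() does
  ::close(fd);
  return value;                     // both increments drained by one read
}
```

This coalescing is why a single read in handleRead() suffices no matter how many times wakeup() was called before the loop woke up.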

In the program, the EventLoop constructor creates the eventfd and stores it in EventLoop::wakeupFd_, wraps it in a Channel stored in EventLoop::wakeupChannel_, and registers the eventfd's read event with poll(2). When poll(2) reports a read event on the eventfd (triggered by a user), the callback EventLoop::handleRead() (an eventfd read) consumes the data written by wakeup() (an eventfd write). That round trip is the wakeup. Note that this follows exactly the same path as an ordinary file descriptor: the wakeup mechanism, too, goes through the I/O multiplexer.

void EventLoop::handleRead()
{
  uint64_t one = 1;
  ssize_t n = ::read(wakeupFd_, &one, sizeof one);
  if (n != sizeof one)
  {
    LOG_ERROR << "EventLoop::handleRead() reads " << n << " bytes instead of 8";
  }
}
void EventLoop::wakeup()
{
  uint64_t one = 1;
  ssize_t n = ::write(wakeupFd_, &one, sizeof one);
  if (n != sizeof one)
  {
    LOG_ERROR << "EventLoop::wakeup() writes " << n << " bytes instead of 8";
  }
}

In the current program, EventLoop::runInLoop(const Functor& cb) is the user-facing interface for running a callback inside the EventLoop's I/O thread. Since this interface may be called from other threads, it raises a thread-safety question; the solution here is not locking, but moving the execution of cb() into the I/O thread:

(1) If the user calls it from the I/O thread itself, the callback runs synchronously (i.e. cb(); is executed).

(2) If the user calls it from another thread, cb is appended to a queue and the I/O thread is woken up to run it (i.e. EventLoop::queueInLoop(cb); is executed). With this facility we can easily move work between threads, achieving thread safety without locking around the callback itself.

void EventLoop::runInLoop(const Functor& cb)
{
    if (isInLoopThread())
        cb();
    else
        queueInLoop(cb);
}

In case (2), EventLoop::queueInLoop(cb) first stores the callback in the queue std::vector<Functor> pendingFunctors_, then calls wakeup() (an eventfd write) to wake the owning I/O thread. Once woken, the thread returns from poll(2) and goes on to run EventLoop::doPendingFunctors(), which walks the queue and executes the callbacks.

void EventLoop::queueInLoop(const Functor& cb)
{
    {
        MutexLockGuard lock(mutex_);
        pendingFunctors_.push_back(cb);
    }

    if (!isInLoopThread() || callingPendingFunctors_)
        wakeup();
}

//inside of EventLoop::loop()
while (!quit_)
{
    activeChannels_.clear();
    poller_->poll(kPollTimeMs, &activeChannels_);
    for (ChannelList::iterator it = activeChannels_.begin();
        it != activeChannels_.end(); ++it)
    {
        (*it)->handleEvent();
    }
    doPendingFunctors();
}

void EventLoop::doPendingFunctors()
{
  std::vector<Functor> functors;
  callingPendingFunctors_ = true;

  {
    MutexLockGuard lock(mutex_);
    functors.swap(pendingFunctors_);
  }

  for (size_t i = 0; i < functors.size(); ++i)
  {
    functors[i]();
  }
  callingPendingFunctors_ = false;
}

A detail worth learning from EventLoop::doPendingFunctors(): instead of invoking the Functors one by one inside the critical section, it swap()s the callback queue into the local variable functors. This shortens the critical section, avoids deadlock (a callback may itself call queueInLoop()), and empties EventLoop::pendingFunctors_ in the same step.
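The pattern generalizes beyond EventLoop. A sketch using std::mutex as a stand-in for this program's MutexLock (the names here are illustrative, not muduo's):

```cpp
#include <vector>
#include <functional>
#include <mutex>

std::mutex g_mutex;
std::vector<std::function<void()>> g_pending;  // stand-in for pendingFunctors_

// Drain the shared queue while holding the lock only for an O(1) swap,
// then run the callbacks unlocked; a callback may safely re-enqueue.
int drainAndRun()
{
  std::vector<std::function<void()>> functors;
  {
    std::lock_guard<std::mutex> lock(g_mutex);
    functors.swap(g_pending);  // also leaves g_pending empty
  }
  for (size_t i = 0; i < functors.size(); ++i)
    functors[i]();
  return static_cast<int>(functors.size());
}
```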

Next, an important efficiency question.

In the while loop above, poll(2) is usually given a timeout. If the timeout is 0, then whenever there are no network I/O events and no other tasks, the worker threads spin, wasting CPU time. If the timeout is greater than 0, then with no network I/O events poll(2) still hangs for the full timeout before returning, so doPendingFunctors() cannot run promptly: any newly queued task is processed with noticeable latency. Neither option is good.

So how do we solve this?

What we actually want is: when there are no network I/O events and no other tasks, the worker threads should block rather than spin; and when a task does arrive, the threads should handle it immediately instead of waiting out the poll(2) timeout.

The solution, on Linux, is this: regardless of which file descriptors are registered with poll(2), we always register one extra default fd, the wakeup fd. When another task needs attention, we write a single byte to the wakeup fd; it immediately becomes readable, poll(2) wakes up and returns, and doPendingFunctors() runs right away. Conversely, with no tasks and no network I/O events, poll(2) simply stays blocked doing nothing.

(1) Busy-spinning:

int timeout = 0;

while (!quit) {
    poll(timeout);
    handle_other_thing();
}

With a timeout of 0, when there are no network I/O events and no other tasks, the worker threads spin (the while condition is evaluated over and over while the CPU does no useful work), wasting CPU time.

(2) Hanging in poll:

With a timeout greater than 0, when there are no network I/O events, epoll_wait/poll/select still hangs for the full timeout before returning, so handle_other_thing() cannot run promptly and other tasks are delayed.

Suspended: generally active, initiated by the system or the program; the process may even be swapped out to secondary storage. (It does not hold the CPU and may give up its memory, residing in external storage.)

Blocked: generally passive; the process fails to acquire a contended resource and is parked in memory, waiting for that resource or a semaphore to wake it. (It gives up the CPU but not its memory.)

Why do operating systems have a suspended state at all? It is tied to medium-term scheduling: when a program in memory needs a large amount of space and none is free, the OS, following its scheduling policy, moves some processes out to external storage to make room for the running program's code and data; hence the suspended state.


4. Program test

Test file test.cpp:

#include "EventLoopThread.hpp"
#include "EventLoop.hpp"
#include "Thread.hpp"
#include <iostream>
#include <memory>

using namespace std;

void test()
{
  cout<<"tid "<<CurrentThreadtid()<<": runInLoop..."<<endl;
}

int main()
{
  cout<<"Main: pid: "<<getpid()<<" tid: "<<CurrentThreadtid()<<endl;//main thread
  //sleep(1);

  EventLoopThread ELThread1;
  EventLoop* loop1 = ELThread1.startLoop();//thread 2
  sleep(1);
  loop1->runInLoop(test);
  
  EventLoopThread ELThread2;
  EventLoop* loop2 = ELThread2.startLoop();//thread 3
  sleep(1);
  loop2->runInLoop(test);

  loop1->loop(); //test "one thread one loop"
  loop2->loop(); //test "one thread one loop"
  
  sleep(1);
  //loop1->quit();
  loop1->runInLoop(bind(&EventLoop::quit,loop1));
  //loop2->quit();
  loop2->runInLoop(bind(&EventLoop::quit,loop2));
  sleep(1);

  return 0;
}
baddy@ubuntu:~/Documents/Reactor/s1.1$ g++ -std=c++11 -pthread -o test MutexLockGuard.hpp Condition.hpp Thread.hpp Thread.cpp Channel.hpp Channel.cpp EventLoop.hpp EventLoop.cpp Poller.hpp Poller.cpp EventLoopThread.hpp testEventLoopThread.cpp 
baddy@ubuntu:~/Documents/Reactor/s1.1$ which valgrind
/usr/bin/valgrind
baddy@ubuntu:~/Documents/Reactor/s1.1$ /usr/bin/valgrind ./test
==22825== Memcheck, a memory error detector
==22825== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==22825== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==22825== Command: ./test
==22825== 
Main: pid: 22825 tid: 22825
tid 22825: create a new thread
tid 22825: waiting
tid 22826: Thread::func_() started!
tid 22826: notified
tid 22825: received notification
tid 22826: start looping...
tid 22825: create a new thread
tid 22826: runInLoop...
tid 22825: waiting
tid 22827: Thread::func_() started!
tid 22827: notified
tid 22825: received notification
tid 22827: start looping...
tid 22825: This EventLoop had been created!
tid 22825: This EventLoop had been created!
tid 22827: runInLoop...
tid 22826: end looping...
tid 22827: end looping...
tid 22827: Thread end!
tid 22826: Thread end!
==22825== 
==22825== HEAP SUMMARY:
==22825==     in use at exit: 0 bytes in 0 blocks
==22825==   total heap usage: 34 allocs, 34 frees, 75,472 bytes allocated
==22825== 
==22825== All heap blocks were freed -- no leaks are possible
==22825== 
==22825== For counts of detected and suppressed errors, rerun with: -v
==22825== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

The interface function EventLoopThread::startLoop() creates a new thread that runs EventLoop::loop(), and returns a pointer to that thread's EventLoop object for the user. With this pointer the user can call the interface EventLoop::runInLoop(Functor cb) to run the callback cb() inside the EventLoop's I/O thread. Calling EventLoop::loop() through the pointer fails, because this Reactor follows "one loop per thread".

In part 1 of this series, the thread-synchronization wrappers MutexLockGuard and Condition, the thread wrapper Thread, and the event-loop wrappers EventLoop and EventLoopThread were completed, giving us the event-loop framework. This article adds the Channel and Poller classes and the extended EventLoop, implementing event-driven dispatch. In the previous chapter I put each class's declaration and implementation in a single file, which caused multiple-definition errors at build time. Good practice is to put class declarations in header files and implementations in source files; likewise, define global variables and functions in a source file and declare them in the header with the extern keyword.

The Channel class

#ifndef CHANNEL_H_
#define CHANNEL_H_

#include <functional>
#include <poll.h>

/// A selectable I/O channel.
///
/// This class doesn't own the file descriptor.
/// The file descriptor could be a socket,
/// an eventfd, a timerfd, or a signalfd

class EventLoop;

class Channel //: boost::noncopyable
{
 public:
  typedef std::function<void()> EventCallback;

  Channel(EventLoop* loop, int fdArg);

  void handleEvent();
  void setReadCallback(const EventCallback& cb)
  { readCallback_ = cb; }
  void setWriteCallback(const EventCallback& cb)
  { writeCallback_ = cb; }
  void setErrorCallback(const EventCallback& cb)
  { errorCallback_ = cb; }

  int fd() const { return fd_; }
  int events() const { return events_; }
  void set_revents(int revt) { revents_ = revt; }
  bool isNoneEvent() const { return events_ == kNoneEvent; }

  void enableReading() { events_ |= kReadEvent; update(); }
  // void enableWriting() { events_ |= kWriteEvent; update(); }
  // void disableWriting() { events_ &= ~kWriteEvent; update(); }
  // void disableAll() { events_ = kNoneEvent; update(); }

  // for Poller
  int index() { return index_; }
  void set_index(int idx) { index_ = idx; }

  EventLoop* ownerLoop() { return loop_; }

private:
  void update();

  static const int kNoneEvent;
  static const int kReadEvent;
  static const int kWriteEvent;

  EventLoop* loop_;
  const int  fd_;
  int        events_;
  int        revents_;
  int        index_; // used by Poller.

  EventCallback readCallback_;
  EventCallback writeCallback_;
  EventCallback errorCallback_;
};

#endif
#include "Channel.hpp"
#include "EventLoop.hpp"
#include <poll.h>

const int Channel::kNoneEvent = 0;
const int Channel::kReadEvent = POLLIN | POLLPRI;
const int Channel::kWriteEvent = POLLOUT;

Channel::Channel(EventLoop* loop, int fdArg)
  : loop_(loop),
    fd_(fdArg),
    events_(0),
    revents_(0),
    index_(-1)
{}

void Channel::update()
{
  loop_->updateChannel(this);
}
 
void Channel::handleEvent()
{
  if (revents_ & POLLNVAL) {
    //LOG_WARN << "Channel::handle_event() POLLNVAL";
  }

  if (revents_ & (POLLERR | POLLNVAL))
    if (errorCallback_) 
      errorCallback_();

  if (revents_ & (POLLIN | POLLPRI | POLLRDHUP))
    if (readCallback_) 
      readCallback_();

  if (revents_ & POLLOUT)
    if (writeCallback_) 
      writeCallback_();
}

The Poller class

#ifndef POLLER_H_
#define POLLER_H_

#include <map>
#include <vector>

/// IO Multiplexing with poll(2).
///
/// This class doesn't own the Channel objects.

struct pollfd;
class Channel;
class EventLoop;

class Poller //: boost::noncopyable
{
 public:
  typedef std::vector<Channel*> ChannelList;

  Poller(EventLoop* loop);
  ~Poller();

  /// Polls the I/O events.
  /// Must be called in the loop thread.
  void poll(int timeoutMs, ChannelList* activeChannels);

  /// Changes the interested I/O events.
  /// Must be called in the loop thread.
  void updateChannel(Channel* channel);

  void assertInLoopThread();

 private:
  void fillActiveChannels(int numEvents,
                          ChannelList* activeChannels) const;

  typedef std::vector<struct pollfd> PollFdList;
  typedef std::map<int, Channel*> ChannelMap;

  EventLoop* ownerLoop_;
  PollFdList pollfds_;
  ChannelMap channels_;
};

#endif 
#include "Poller.hpp"
#include "Channel.hpp"
#include "EventLoop.hpp"
#include <assert.h>
#include <poll.h>

Poller::Poller(EventLoop* loop) : ownerLoop_(loop) {}
Poller::~Poller() {}

void Poller::poll(int timeoutMs, ChannelList* activeChannels)
{
  int numEvents = ::poll(&*pollfds_.begin(), pollfds_.size(), timeoutMs);
  if (numEvents > 0) {
    fillActiveChannels(numEvents, activeChannels);
  } else if (numEvents == 0) {
  } else {
  }
}

void Poller::assertInLoopThread()
{ 
 // ownerLoop_->assertInLoopThread(); 
}

void Poller::fillActiveChannels(int numEvents,
                                ChannelList* activeChannels) const
{
  for (PollFdList::const_iterator pfd = pollfds_.begin();
      pfd != pollfds_.end() && numEvents > 0; ++pfd)
  {
    if (pfd->revents > 0)
    {
      --numEvents;
      ChannelMap::const_iterator ch = channels_.find(pfd->fd);
      //assert(ch != channels_.end());
      Channel* channel = ch->second;
      //assert(channel->fd() == pfd->fd);
      channel->set_revents(pfd->revents);
      // pfd->revents = 0;
      activeChannels->push_back(channel);
    }
  }
}

void Poller::updateChannel(Channel* channel)
{
  //assertInLoopThread();
  //LOG_TRACE << "fd = " << channel->fd() << " events = " << channel->events();
  if (channel->index() < 0) {
    // a new one, add to pollfds_
    //assert(channels_.find(channel->fd()) == channels_.end());
    struct pollfd pfd;
    pfd.fd = channel->fd();
    pfd.events = static_cast<short>(channel->events());
    pfd.revents = 0;
    pollfds_.push_back(pfd);
    int idx = static_cast<int>(pollfds_.size())-1;
    channel->set_index(idx);
    channels_[pfd.fd] = channel;
  } else {
    // update existing one
    //assert(channels_.find(channel->fd()) != channels_.end());
    //assert(channels_[channel->fd()] == channel);
    int idx = channel->index();
    //assert(0 <= idx && idx < static_cast<int>(pollfds_.size()));
    struct pollfd& pfd = pollfds_[idx];
    //assert(pfd.fd == channel->fd() || pfd.fd == -1);
    pfd.events = static_cast<short>(channel->events());
    pfd.revents = 0;
    if (channel->isNoneEvent()) {
      // ignore this pollfd
      pfd.fd = -1;
    }
  }
}

The EventLoop class

#ifndef EVENTLOOP_H_
#define EVENTLOOP_H_

#include <poll.h>
#include <unistd.h>
#include <functional>
#include <memory>
#include <vector>
#include <iostream>
#include <sys/syscall.h>
#include "Thread.hpp"
#include "Channel.hpp"
#include "MutexLockGuard.hpp"
#include "Poller.hpp"

class EventLoop{
public:
    typedef std::function<void()> Functor;
    EventLoop(); 
    ~EventLoop(); 
    
    bool isInLoopThread() const { return threadId_==CurrentThreadtid(); }
    void loop();
    void quit();
    void runInLoop(const Functor& cb);
  /// Queues callback in the loop thread.
  /// Runs after finish pooling.
  /// Safe to call from other threads.
    void queueInLoop(const Functor& cb);
  // internal use only
    void wakeup();
    void updateChannel(Channel* channel);

private:
    void handleRead();  // waked up
    void doPendingFunctors();
    typedef std::vector<Channel*> ChannelList;
    bool looping_; /* atomic */
    bool quit_; /* atomic */
    bool callingPendingFunctors_; /* atomic */
    const pid_t threadId_;
    //Timestamp pollReturnTime_;
    std::unique_ptr<Poller> poller_;
    //std::unique_ptr<TimerQueue> timerQueue_;
    int wakeupFd_;
    // unlike in TimerQueue, which is an internal class,
    // we don't expose Channel to client.
    std::unique_ptr<Channel> wakeupChannel_;
    ChannelList activeChannels_;
    MutexLock mutex_;
    std::vector<Functor> pendingFunctors_; // @GuardedBy mutex_
};

#endif
#include "EventLoop.hpp"
#include "Channel.hpp"
#include "Poller.hpp"
#include "Thread.hpp"
#include "MutexLockGuard.hpp"

#include <thread>
#include <poll.h>
#include <unistd.h>
#include <functional>
#include <memory>
#include <vector>
#include <iostream>
#include <sys/syscall.h>
#include <sys/eventfd.h>

const int kPollTimeMs = 10000;

static int createEventfd()
{
  int evtfd = ::eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
  if (evtfd < 0)
  {
    //LOG_SYSERR << "Failed in eventfd";
    //abort();
  }
  return evtfd;
}

EventLoop::EventLoop()
  : looping_(false),
    quit_(false),
    callingPendingFunctors_(false),
    threadId_(CurrentThreadtid()),
    poller_(new Poller(this)),
    //timerQueue_(new TimerQueue(this)),
    wakeupFd_(createEventfd()),
    wakeupChannel_(new Channel(this, wakeupFd_))
{
  wakeupChannel_->setReadCallback(
      std::bind(&EventLoop::handleRead, this));
  // we are always reading the wakeupfd
  wakeupChannel_->enableReading();
}

EventLoop::~EventLoop()
{
  //assert(!looping_);
  close(wakeupFd_);
}

void EventLoop::loop()
{
  if( !isInLoopThread() ){
      std::cout<<"tid "<<CurrentThreadtid()<<": This EventLoop had been created!"<<std::endl;
  }else{
    std::cout<<"tid "<<CurrentThreadtid()<<": start looping..."<<std::endl;
    quit_=false; 
    while (!quit_)
    {
       activeChannels_.clear();
       poller_->poll(kPollTimeMs, &activeChannels_);
       for (ChannelList::iterator it = activeChannels_.begin();
           it != activeChannels_.end(); ++it){
         (*it)->handleEvent();
       }
       doPendingFunctors();
    }
    std::cout<<"tid "<<CurrentThreadtid()<<": end looping..."<<std::endl;
  }
}

void EventLoop::quit()
{
  quit_ = true;
  if (!isInLoopThread())
  {
    wakeup();
  }
}

void EventLoop::runInLoop(const Functor& cb)
{
  if (isInLoopThread())
  {
    cb();
  }
  else
  {
    queueInLoop(cb);
  }
}

void EventLoop::queueInLoop(const Functor& cb)
{
  {
  MutexLockGuard lock(mutex_);
  pendingFunctors_.push_back(cb);
  }

  if (!isInLoopThread() || callingPendingFunctors_)
  {
    wakeup();
  }
}

void EventLoop::updateChannel(Channel* channel)
{
  //assert(channel->ownerLoop() == this);
  //assertInLoopThread();
  poller_->updateChannel(channel);
}

void EventLoop::wakeup()
{
  uint64_t one = 1;
  ssize_t n = write(wakeupFd_, &one, sizeof one);
  if (n != sizeof one)
  {
    //LOG_ERROR << "EventLoop::wakeup() writes " << n << " bytes instead of 8";
  }
}

void EventLoop::handleRead()
{
  uint64_t one = 1;
  ssize_t n = read(wakeupFd_, &one, sizeof one);
  if (n != sizeof one)
  {
    //LOG_ERROR << "EventLoop::handleRead() reads " << n << " bytes instead of 8";
  }
}

void EventLoop::doPendingFunctors()
{
  std::vector<Functor> functors;
  callingPendingFunctors_ = true;

  {
  MutexLockGuard lock(mutex_);
  functors.swap(pendingFunctors_);
  }

  for (size_t i = 0; i < functors.size(); ++i)
  {
    functors[i]();
  }
  callingPendingFunctors_ = false;
}

As for the other classes, they are simply split out of last chapter's single headers into a header declaration and a source implementation:

The Thread class

#ifndef THREAD_H_
#define THREAD_H_

#include <thread>
#include <memory>
#include <functional>
#include <string>
#include <iostream>
#include <unistd.h>
#include <sys/syscall.h>

//global variable and function
extern __thread pid_t t_cachedTid;
extern pid_t gettid();
extern pid_t CurrentThreadtid();

class Thread
{
public:
    //typedef void* (*ThreadFunc)(void*);
    typedef std::function<void ()> ThreadFunc;
    Thread(ThreadFunc func); 
    ~Thread();
    void start();

private:
    ThreadFunc func_;
    std::string name_;
    pid_t tid_;
    pthread_t tidp_;
};

#endif
#include "Thread.hpp"

#include <thread>
#include <memory>
#include <functional>
#include <string>
#include <unistd.h>
#include <iostream>
#include <sys/syscall.h>

namespace detail
{

struct ThreadData
{
    typedef std::function<void ()> ThreadFunc;
    ThreadFunc func_;
    std::string name_;
    pid_t tid_;

    ThreadData(ThreadFunc func, const std::string& name, pid_t tid)
      : func_(std::move(func)),
        name_(name),
        tid_(tid)
    { }

    void runInThread()
    {
        tid_ = CurrentThreadtid();
        std::cout<<"tid "<<CurrentThreadtid()<<": Thread::func_() started!"<<std::endl;
        func_();
        name_=std::string("finished");
    }
};

void* startThread(void* obj)
{
  ThreadData* data = static_cast<ThreadData*>(obj);
  data->runInThread();
  delete data;
  std::cout<<"tid "<<CurrentThreadtid()<<": Thread end!"<<std::endl;
  return NULL;
}
 
}
pid_t gettid()
{
  return static_cast<pid_t>(syscall(SYS_gettid));
}

__thread pid_t t_cachedTid = 0;

pid_t CurrentThreadtid()
{
  if (t_cachedTid == 0)
  {
    t_cachedTid = gettid();
  }
  return t_cachedTid;
}


Thread::Thread(ThreadFunc func) : func_(func), tidp_(0) {}
Thread::~Thread()
{
    pthread_detach(tidp_);// detach so the system reclaims the thread's resources; otherwise they would leak
}
    
void Thread::start()
{
   detail::ThreadData* data = new detail::ThreadData(func_, name_, tid_);
   std::cout<<"tid "<<CurrentThreadtid()<<": create a new thread"<<std::endl;
   if(pthread_create(&tidp_, NULL, &detail::startThread, data))
   {
       delete data;
       std::cout<<"thread create error"<<std::endl;
   }
}

The MutexLockGuard class

#ifndef MUTEXLOCKGUARD_H_
#define MUTEXLOCKGUARD_H_

#include <pthread.h>
#include "Thread.hpp"
#include <iostream>

class MutexLock //: boost::noncopyable
{
private:
    pthread_mutex_t mutex_;
    MutexLock(const MutexLock&);
    MutexLock& operator=(const MutexLock&);

public:
    MutexLock()
    {
        pthread_mutex_init(&mutex_,NULL);
        //std::cout<<"MutexLock create!"<<std::endl;
    }
    ~MutexLock()
    {
        pthread_mutex_destroy(&mutex_);
        //std::cout<<"MutexLock destroy!"<<std::endl;
    }
    void lock() { pthread_mutex_lock(&mutex_); }
    void unlock() { pthread_mutex_unlock(&mutex_); }
    pthread_mutex_t* getPthreadMutex() { return &mutex_; }
};

class MutexLockGuard //: boost::noncopyable
{
private:
    MutexLock& mutex_; // a reference: the guard must lock/unlock the caller's MutexLock itself, not a copy
    MutexLockGuard(const MutexLockGuard&);
    MutexLockGuard& operator=(const MutexLockGuard&);

public:
    explicit MutexLockGuard( MutexLock& mutex )
      : mutex_(mutex)
    {
        mutex_.lock();
        //std::cout<<"tid "<<CurrentThreadtid()<<": MutexLockGuard lock!"<<std::endl;
    }
    ~MutexLockGuard()
    {
        mutex_.unlock();
        //std::cout<<"tid "<<CurrentThreadtid()<<": MutexLockGuard unlock!"<<std::endl;
    }
};

#define MutexLockGuard(x) static_assert(false, "missing mutex guard var name")
//C++11 introduced static_assert for compile-time (static) assertions.
//If the constant expression in the first argument is false, compilation fails at the line of the static_assert, with the second argument as the error message.

#endif // MUTEXLOCKGUARD_H_
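The point of this macro is to reject a guard written as MutexLockGuard(mutex_); at compile time: such a guard does not outlive the statement, so the mutex is released before the intended critical section runs. The lifetime issue can be shown with toy types (these are illustrative, not muduo's):

```cpp
// A toy lock/guard pair that records whether the lock is held.
struct ToyLock {
  int locked = 0;
  void lock()   { ++locked; }
  void unlock() { --locked; }
};

struct ToyGuard {
  ToyLock& l_;
  explicit ToyGuard(ToyLock& l) : l_(l) { l_.lock(); }
  ~ToyGuard() { l_.unlock(); }
};

int heldAfterTemporary(ToyLock& l)
{
  ToyGuard{l};       // unnamed temporary: destroyed at the semicolon
  return l.locked;   // the lock has already been released here
}

int heldWithNamedGuard(ToyLock& l)
{
  ToyGuard guard(l); // named object: lives until the end of the scope
  return l.locked;   // still locked while the "critical section" runs
}
```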

The Condition class

#ifndef CONDITION_H_
#define CONDITION_H_

#include "MutexLockGuard.hpp"
#include <pthread.h>

class Condition
{
  public:
    explicit Condition(MutexLock& mutex) : mutex_(mutex)
    {
      pthread_cond_init(&pcond_, NULL);
    }

   ~Condition()
   {
      pthread_cond_destroy(&pcond_);
   }

   void wait()
   {
      pthread_cond_wait(&pcond_, mutex_.getPthreadMutex());
   }

   void notify()
   {
      pthread_cond_signal(&pcond_);
   }

   void notifyAll()
   {
      pthread_cond_broadcast(&pcond_);
   }

  private:
    MutexLock& mutex_;
    pthread_cond_t pcond_;
};

#endif 

The EventLoopThread class

#ifndef EVENT_LOOP_THREAD_H_
#define EVENT_LOOP_THREAD_H_

#include "EventLoop.hpp"
#include "Thread.hpp"
#include "MutexLockGuard.hpp"
#include "Condition.hpp"
#include <memory>
#include <iostream>

class EventLoopThread
{
public:
  EventLoopThread() 
    : loop_(NULL), exiting_(false), thread_(std::bind(&EventLoopThread::ThreadFunc, this)), mutex_(), cond_(mutex_) {}
  //~EventLoopThread();
  EventLoop* startLoop();
  
private:
  void ThreadFunc();

  EventLoop* loop_; 
  bool exiting_;
  Thread thread_; 
  MutexLock mutex_;
  Condition cond_;
};

EventLoop* EventLoopThread::startLoop()
{
  //assert(!thread_.started());
  thread_.start();
  
  {
    MutexLockGuard lock(mutex_);
    while (loop_ == NULL)
    {
      std::cout<<"tid "<<CurrentThreadtid()<<": waiting"<<std::endl;
      cond_.wait();
    }
    std::cout<<"tid "<<CurrentThreadtid()<<": received notification"<<std::endl;
  }
  return loop_;
}

void EventLoopThread::ThreadFunc()
{
    EventLoop loop;

    {
      MutexLockGuard lock(mutex_);
      loop_ = &loop;
      cond_.notify();
      std::cout<<"tid "<<CurrentThreadtid()<<": notified"<<std::endl;
    }

    loop.loop();
    //assert(exiting_);
}

#endif

References

https://github.com/chenshuo/muduo
