Reading notes on muduo, a multithreaded network library

These notes give a detailed walk-through of Chen Shuo's multithreaded network library muduo, covering its base library, its network library, and the event loop at their center. muduo provides a set of efficient, easy-to-use classes designed to simplify network programming and multithreading. The notes examine asynchronous logging, thread-local data, singleton classes, the thread pool, the Acceptor, the Connector, the Channel, the EventLoop, TcpConnection session management, TcpServer, and TcpClient, and analyze how muduo combines these components to achieve high-performance network communication.


Last time I read caolan's async.js source; this time I'm studying Chen Shuo's multithreaded network library muduo (https://github.com/chenshuo/muduo).


Background reading:

POSIX Threads (http://en.wikipedia.org/wiki/POSIX_Threads)

Boost library (http://www.boost.org/)


Base library (muduo/base)

1. AsyncLogging: asynchronous logging

The asynchronous logging class maintains two buffers: log messages are appended to the current buffer, while nextBuffer_ is kept in reserve. When the current buffer fills up, appending switches to nextBuffer_. Since this runs under multiple threads, the operation naturally has to be locked:

 
  
void AsyncLogging::append(const char* logline, int len)
{
  muduo::MutexLockGuard lock(mutex_);
  // If the current buffer has room, append directly; otherwise push it
  // onto the write queue and take over nextBuffer_ via move semantics.
  if (currentBuffer_->avail() > len)
  {
    currentBuffer_->append(logline, len);
  }
  else
  {
    buffers_.push_back(currentBuffer_.release());
    if (nextBuffer_)
    {
      currentBuffer_ = boost::ptr_container::move(nextBuffer_);
    }
    else
    {
      // Rarely happens in practice: only when nextBuffer_ has also filled
      // up before the writer thread has drained the queue; two buffers
      // are normally enough.
      currentBuffer_.reset(new Buffer); // Rarely happens
    }
    currentBuffer_->append(logline, len);
    // Wake the writer thread; without this it would wait up to one flush
    // interval before processing the buffers queued in buffers_.
    cond_.notify();
  }
}
The log-writing thread function:

 
void AsyncLogging::threadFunc()
{
  assert(running_ == true);
  latch_.countDown();
  LogFile output(basename_, rollSize_, false);
  BufferPtr newBuffer1(new Buffer);
  BufferPtr newBuffer2(new Buffer);
  newBuffer1->bzero();
  newBuffer2->bzero();
  BufferVector buffersToWrite;
  buffersToWrite.reserve(16);
  while (running_)
  {
    assert(newBuffer1 && newBuffer1->length() == 0);
    assert(newBuffer2 && newBuffer2->length() == 0);
    assert(buffersToWrite.empty());
    {
      muduo::MutexLockGuard lock(mutex_);
      // buffers_ is empty: wait for the signal from append(), or the
      // flush-interval timeout.
      if (buffers_.empty()) // unusual usage!
      {
        cond_.waitForSeconds(flushInterval_);
      }
      // Retire the current buffer and replace it with spare buffer 1,
      // using move semantics to avoid copying.
      buffers_.push_back(currentBuffer_.release());
      currentBuffer_ = boost::ptr_container::move(newBuffer1);
      // Hand the filled buffers to the local buffersToWrite;
      // buffers_ keeps accepting new insertions.
      buffersToWrite.swap(buffers_);
      if (!nextBuffer_)
      {
        nextBuffer_ = boost::ptr_container::move(newBuffer2);
      }
    }
    assert(!buffersToWrite.empty());
    // Cap the backlog at 25 buffers; anything beyond that is erased,
    // with a drop message logged.
    if (buffersToWrite.size() > 25)
    {
      char buf[256];
      snprintf(buf, sizeof buf, "Dropped log messages at %s, %zd larger buffers\n",
               Timestamp::now().toFormattedString().c_str(),
               buffersToWrite.size()-2);
      fputs(buf, stderr);
      output.append(buf, static_cast<int>(strlen(buf)));
      buffersToWrite.erase(buffersToWrite.begin()+25, buffersToWrite.end());
    }
    // Write every buffer in buffersToWrite to the log file.
    for (size_t i = 0; i < buffersToWrite.size(); ++i)
    {
      // FIXME: use unbuffered stdio FILE ? or use ::writev ?
      output.append(buffersToWrite[i].data(), buffersToWrite[i].length());
    }
    // Keep at most two buffers, to refill the spares newBuffer1 and newBuffer2.
    if (buffersToWrite.size() > 2)
    {
      // drop non-bzero-ed buffers, avoid trashing
      buffersToWrite.resize(2);
    }
    // Refill spare buffer 1.
    if (!newBuffer1)
    {
      assert(!buffersToWrite.empty());
      newBuffer1 = buffersToWrite.pop_back();
      newBuffer1->reset();
    }
    // Refill spare buffer 2.
    if (!newBuffer2)
    {
      assert(!buffersToWrite.empty());
      newBuffer2 = buffersToWrite.pop_back();
      newBuffer2->reset();
    }
    buffersToWrite.clear();
    output.flush();
  }
  output.flush();
}
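To see the pieces working end to end, here is a minimal usage sketch modeled on muduo's own AsyncLogging test (the basename "testlog" and the roll size are illustrative values): the synchronous Logger front end is redirected into the AsyncLogging instance, so LOG_INFO and friends flow through the two-buffer pipeline above.

#include <muduo/base/AsyncLogging.h>
#include <muduo/base/Logging.h>

muduo::AsyncLogging* g_asyncLog = NULL;

// Logger::setOutput takes a plain function: forward each formatted
// message into the asynchronous front end (AsyncLogging::append).
void asyncOutput(const char* msg, int len)
{
  g_asyncLog->append(msg, len);
}

int main()
{
  muduo::AsyncLogging log("testlog", 500*1000*1000); // roll the file at ~500 MB
  log.start();              // starts the writer thread running threadFunc()
  g_asyncLog = &log;
  muduo::Logger::setOutput(asyncOutput);

  LOG_INFO << "Hello, async logging"; // goes through append(), not stdout
  log.stop();
  return 0;
}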

2. ThreadLocal: per-thread data

muduo's thread-local data uses the per-thread key/value store provided by pthreads. First a storage key is created, passing in the destructor:

pthread_key_create(&pkey_,&ThreadLocal::destructor);

Then a value is set for the key:

pthread_setspecific(pkey_,newObj);

muduo wraps this generically: a ThreadLocal<T> instance gives each thread its own lazily created T.

 
T& value()
{
  T* perThreadValue = static_cast<T*>(pthread_getspecific(pkey_));
  if (!perThreadValue)
  {
    T* newObj = new T();
    pthread_setspecific(pkey_, newObj);
    perThreadValue = newObj;
  }
  return *perThreadValue;
}
One detail stands out in ThreadLocal's destructor: it checks that T is a complete type, because calling delete on a pointer to an incomplete type is undefined behavior; the sizeof trick below turns such misuse into a compile-time error instead.

 
 
static void destructor(void* x)
{
    T* obj = static_cast<T*>(x);
    // sizeof(T) fails to compile if T is incomplete, so this typedef
    // turns a delete on an incomplete type into a compile-time failure.
    typedef char T_must_be_complete_type[sizeof(T) == 0 ? -1 : 1];
    T_must_be_complete_type dummy; (void) dummy;
    delete obj;
}
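A quick usage sketch (the Counter type is made up for illustration): each thread that touches the ThreadLocal gets its own lazily constructed instance.

#include <muduo/base/ThreadLocal.h>
#include <muduo/base/Thread.h>
#include <cstdio>

// Hypothetical per-thread state, for illustration only.
struct Counter
{
  int hits;
  Counter() : hits(0) {}
};

muduo::ThreadLocal<Counter> g_counter;

void worker()
{
  // Each thread gets its own Counter, created on first access.
  g_counter.value().hits++;
  printf("hits in this thread: %d\n", g_counter.value().hits); // always 1 here
}

int main()
{
  muduo::Thread t1(worker), t2(worker);
  t1.start(); t2.start();
  t1.join(); t2.join();
  worker(); // the main thread's Counter is independent as well
  return 0;
}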

3. Singleton classes

muduo also provides two kinds of singleton: an ordinary singleton and a per-thread singleton.

Singleton is the ordinary one; the instance is created through pthread_once, guaranteeing that the constructor runs only once no matter how many threads race on it:

 
static T& instance()
{
    pthread_once(&ponce_, &Singleton::init);
    return *value_;
}
Here ponce_ is the once-control variable and value_ is a static class member:

 
private:
  static pthread_once_t ponce_;
  static T* value_;
pthread_once guarantees one-time initialization. Now look at ThreadLocalSingleton, which, like ThreadLocal, builds on the pthread_key mechanism:

 
static T& instance()
{
    if (!t_value_)
    {
      t_value_ = new T();
      deleter_.set(t_value_);
    }
    return *t_value_;
}
Here, however, the resource is managed through an instance of a Deleter class, which implements three functions:

 
Deleter()
{
    pthread_key_create(&pkey_, &ThreadLocalSingleton::destructor);
}
~Deleter()
{
    pthread_key_delete(pkey_);
}
void set(T* newObj)
{
    assert(pthread_getspecific(pkey_) == NULL);
    pthread_setspecific(pkey_, newObj);
}
The constructor creates the key, the destructor deletes it, and set() stores the per-thread value; pthread_key is generally used so that per-thread resources are released correctly when a thread exits.

For ThreadLocalSingleton, t_value_ is declared as a thread-local (__thread) variable, so every thread instantiates its own copy of the singleton:

 
static __thread T* t_value_;
static Deleter deleter_;
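Their use, as a minimal sketch (Config and RequestContext are hypothetical types): Singleton<T>::instance() yields one object shared by the whole process, while ThreadLocalSingleton<T>::instance() yields one object per thread.

#include <muduo/base/Singleton.h>
#include <muduo/base/ThreadLocalSingleton.h>
#include <string>

// Hypothetical types, for illustration only.
struct Config { std::string path; };
struct RequestContext { int requestId; };

void example()
{
  // One Config for the whole process, constructed on first use via pthread_once.
  Config& cfg = muduo::Singleton<Config>::instance();
  cfg.path = "/etc/app.conf";

  // One RequestContext per thread; other threads see their own copies.
  muduo::ThreadLocalSingleton<RequestContext>::instance().requestId = 42;
}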

4. ThreadPool: the thread pool

muduo also provides a simple, usable thread pool, ThreadPool. Each worker thread runs the runInThread function, which keeps taking tasks from the queue_ task list and executing them, round and round:

 

 
void ThreadPool::runInThread()
{
  try
  {
    while (running_)
    {
      Task task(take());
      if (task)
      {
        task();
      }
    }
  }
  catch (const Exception& ex)
  {
    fprintf(stderr, "exception caught in ThreadPool %s\n", name_.c_str());
    fprintf(stderr, "reason: %s\n", ex.what());
    fprintf(stderr, "stack trace: %s\n", ex.stackTrace());
    abort();
  }
  catch (const std::exception& ex)
  {
    fprintf(stderr, "exception caught in ThreadPool %s\n", name_.c_str());
    fprintf(stderr, "reason: %s\n", ex.what());
    abort();
  }
  catch (...)
  {
    fprintf(stderr, "unknown exception caught in ThreadPool %s\n", name_.c_str());
    throw; // rethrow
  }
}

The take() function that fetches a task naturally needs the lock:

 
ThreadPool::Task ThreadPool::take()
{
  MutexLockGuard lock(mutex_);
  // always use a while-loop, due to spurious wakeup
  while (queue_.empty() && running_)
  {
    // Idle: wait for work.
    notEmpty_.wait();
  }
  Task task;
  if (!queue_.empty())
  {
    task = queue_.front();
    queue_.pop_front();
    if (maxQueueSize_ > 0)
    {
      // Signal the condition that run() waits on: the queue now has room
      // for a new task.
      notFull_.notify();
    }
  }
  return task;
}

run() is the entry point for submitting a task; here is the rvalue-reference overload:

 
  
void ThreadPool::run(Task&& task)
{
  if (threads_.empty())
  {
    // No worker threads: run the task directly in the calling thread.
    task();
  }
  else
  {
    MutexLockGuard lock(mutex_);
    while (isFull())
    {
      // Queue full: wait for space.
      notFull_.wait();
    }
    assert(!isFull());
    // std::move avoids copying the task object.
    queue_.push_back(std::move(task));
    // Wake a worker thread so it can take the task.
    notEmpty_.notify();
  }
}
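A minimal usage sketch (the task bodies are illustrative): start a pool of workers, submit tasks through run(), and stop the pool when done.

#include <muduo/base/ThreadPool.h>
#include <muduo/base/CountDownLatch.h>
#include <boost/bind.hpp>
#include <cstdio>

void printTask()
{
  printf("task running\n");
}

int main()
{
  muduo::ThreadPool pool("ExamplePool");
  pool.setMaxQueueSize(5); // bound the queue so run() blocks when it is full
  pool.start(4);           // spawn 4 workers, each running runInThread()

  for (int i = 0; i < 10; ++i)
  {
    pool.run(printTask);   // enqueued, then picked up by take()
  }

  // Let the queued tasks drain before stopping (trick from muduo's own test).
  muduo::CountDownLatch latch(1);
  pool.run(boost::bind(&muduo::CountDownLatch::countDown, &latch));
  latch.wait();
  pool.stop();
  return 0;
}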

5. Miscellaneous

The base library also contains a number of everyday utilities, such as logging, file operations, and time handling; I won't analyze them one by one.


Network library (muduo/net)

muduo's net directory contains its implementation of the Reactor pattern.

1. Acceptor

In the Reactor pattern, Acceptor plays the acceptor role: it listens for and creates connections. Besides the constructor, the class offers two interfaces: one sets the new-connection callback, the other starts listening. The overall flow is:
a. create the server's listening socket
b. create the connection channel
c. start listening on the socket and set the channel's read callback
d. in the channel's read callback, call accept(); on success, invoke the new-connection callback
Here is the listen() implementation:
  
void Acceptor::listen()
{
  loop_->assertInLoopThread();
  listenning_ = true;
  // Call listen() on the non-blocking listening socket.
  acceptSocket_.listen();
  // Enable reading so incoming connections trigger the read callback;
  // the result is eventually handed to TcpServer::newConnection.
  acceptChannel_.enableReading();
}
When a connection arrives, the handleRead callback runs:
  
void Acceptor::handleRead()
{
  loop_->assertInLoopThread();
  InetAddress peerAddr(0);
  // FIXME loop until no more
  int connfd = acceptSocket_.accept(&peerAddr);
  if (connfd >= 0)
  {
    // string hostport = peerAddr.toIpPort();
    // LOG_TRACE << "Accepts of " << hostport;
    if (newConnectionCallback_)
    {
      // Bound to TcpServer::newConnection, which creates a TcpConnection
      // from the accepted fd and the peer address.
      newConnectionCallback_(connfd, peerAddr);
    }
    else
    {
      sockets::close(connfd);
    }
  }
  else
  {
    LOG_SYSERR << "in Acceptor::handleRead";
    // Read the section named "The special problem of
    // accept()ing when you can't" in libev's doc.
    // By Marc Lehmann, author of libev.
    if (errno == EMFILE)
    {
      // Out of fds: release the reserved idle fd, accept and immediately
      // close the pending connection, then re-reserve the idle fd.
      ::close(idleFd_);
      idleFd_ = ::accept(acceptSocket_.fd(), NULL, NULL);
      ::close(idleFd_);
      idleFd_ = ::open("/dev/null", O_RDONLY | O_CLOEXEC);
    }
  }
}

2. Connector

Connector handles the client side's connection requests. Let's look at the connect() method:
  
void Connector::connect()
{
  // Create a non-blocking socket.
  int sockfd = sockets::createNonblockingOrDie();
  // Initiate the connection.
  int ret = sockets::connect(sockfd, serverAddr_.getSockAddrInet());
  int savedErrno = (ret == 0) ? 0 : errno;
  switch (savedErrno)
  {
    case 0:
    case EINPROGRESS:
    case EINTR:
    case EISCONN:
      // Success (or connection in progress).
      connecting(sockfd);
      break;
    case EAGAIN:
    case EADDRINUSE:
    case EADDRNOTAVAIL:
    case ECONNREFUSED:
    case ENETUNREACH:
      // Retriable failure.
      retry(sockfd);
      break;
    case EACCES:
    case EPERM:
    case EAFNOSUPPORT:
    case EALREADY:
    case EBADF:
    case EFAULT:
    case ENOTSOCK:
      // Unrecoverable error: close the socket.
      LOG_SYSERR << "connect error in Connector::startInLoop " << savedErrno;
      sockets::close(sockfd);
      break;
    default:
      LOG_SYSERR << "Unexpected error in Connector::startInLoop " << savedErrno;
      sockets::close(sockfd);
      // connectErrorCallback_();
      break;
  }
}
Now let's see how connecting() and retry() handle their respective cases:
  
void Connector::connecting(int sockfd)
{
  setState(kConnecting);
  assert(!channel_);
  channel_.reset(new Channel(loop_, sockfd));
  // Install the channel's write callback; the connection is finalized there.
  channel_->setWriteCallback(
      boost::bind(&Connector::handleWrite, this)); // FIXME: unsafe
  channel_->setErrorCallback(
      boost::bind(&Connector::handleError, this)); // FIXME: unsafe
  // channel_->tie(shared_from_this()); is not working,
  // as channel_ is not managed by shared_ptr
  channel_->enableWriting();
}
The handleWrite callback:
  
void Connector::handleWrite()
{
  LOG_TRACE << "Connector::handleWrite " << state_;
  if (state_ == kConnecting)
  {
    // Remove and reset the channel, releasing its resources.
    int sockfd = removeAndResetChannel();
    int err = sockets::getSocketError(sockfd);
    if (err)
    {
      LOG_WARN << "Connector::handleWrite - SO_ERROR = "
               << err << " " << strerror_tl(err);
      retry(sockfd);
    }
    else if (sockets::isSelfConnect(sockfd))
    {
      LOG_WARN << "Connector::handleWrite - Self connect";
      retry(sockfd);
    }
    else
    {
      setState(kConnected);
      if (connect_)
      {
        // Invoke TcpClient::newConnection, which again wraps the fd
        // in a TcpConnection to manage it.
        newConnectionCallback_(sockfd);
      }
      else
      {
        sockets::close(sockfd);
      }
    }
  }
  else
  {
    // what happened?
    assert(state_ == kDisconnected);
  }
}
The retry function:
  
void Connector::retry(int sockfd)
{
  sockets::close(sockfd);
  setState(kDisconnected);
  if (connect_)
  {
    LOG_INFO << "Connector::retry - Retry connecting to " << serverAddr_.toIpPort()
             << " in " << retryDelayMs_ << " milliseconds. ";
    // Schedule the retry on a timer; the delay doubles on each attempt,
    // capped at kMaxRetryDelayMs.
    loop_->runAfter(retryDelayMs_ / 1000.0,
                    boost::bind(&Connector::startInLoop, shared_from_this()));
    retryDelayMs_ = std::min(retryDelayMs_ * 2, kMaxRetryDelayMs);
  }
  else
  {
    LOG_DEBUG << "do not connect";
  }
}

3. Channel

A channel is managed in terms of events, with a callback for each kind of event it may see. The events are:
a. read events
b. write events
c. no event
The callbacks are:
a. read callback
b. write callback
c. close callback
d. error callback
The main dispatching happens in handleEvent, which invokes the matching callbacks.
Every change to a channel's event interest makes the event loop update that channel; the update is carried out by the event loop calling into the poller:
  
void Channel::update()
{
  loop_->updateChannel(this);
}

void EventLoop::updateChannel(Channel* channel)
{
  assert(channel->ownerLoop() == this);
  assertInLoopThread();
  // Delegate the update to the poller.
  poller_->updateChannel(channel);
}
In the event loop's main loop, this code dispatches the channels the poller has marked active:
  
for (ChannelList::iterator it = activeChannels_.begin();
     it != activeChannels_.end(); ++it)
{
  currentActiveChannel_ = *it;
  currentActiveChannel_->handleEvent(pollReturnTime_);
}

4. EventLoop

EventLoop ties the poller and the channels together: as soon as the poller produces new events, the affected channels get their state updated.

EventLoop's cross-thread wakeup mechanism deserves a closer look: muduo uses an eventfd. Once created, writing an 8-byte integer to it adds to a kernel counter, and reading returns the counter's value and resets it to zero; in blocking mode, read blocks while the counter is zero.
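The eventfd semantics just described, as a standalone POSIX sketch independent of muduo:

#include <sys/eventfd.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

int main()
{
  // Create an eventfd whose kernel counter starts at 0.
  int efd = eventfd(0, EFD_CLOEXEC);
  if (efd < 0) { perror("eventfd"); return 1; }

  // Each write adds to the counter; this is what EventLoop::wakeup() does.
  uint64_t one = 1;
  write(efd, &one, sizeof one);
  write(efd, &one, sizeof one);

  // A read returns the accumulated count (2 here) and resets it to 0.
  // A blocking eventfd would block in read while the counter is 0.
  uint64_t count = 0;
  read(efd, &count, sizeof count);
  printf("counter = %llu\n", (unsigned long long)count);

  close(efd);
  return 0;
}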

EventLoop's main loop method, loop(), drives the poll call and the channel dispatch. Here is the source:

  
void EventLoop::loop()
{
  assert(!looping_);
  assertInLoopThread();
  looping_ = true;
  quit_ = false; // FIXME: what if someone calls quit() before loop() ?
  LOG_TRACE << "EventLoop " << this << " start looping";
  while (!quit_)
  {
    activeChannels_.clear();
    // Poll for events. Whether poll(2) or epoll(7) is used is selected
    // through the MUDUO_USE_POLL environment variable; the call returns
    // the poll timestamp.
    pollReturnTime_ = poller_->poll(kPollTimeMs, &activeChannels_);
    ++iteration_;
    // Trace logging.
    if (Logger::logLevel() <= Logger::TRACE)
    {
      printActiveChannels();
    }
    // TODO sort channel by priority
    eventHandling_ = true;
    // Dispatch events on every channel the poller marked active,
    // passing the poll timestamp.
    for (ChannelList::iterator it = activeChannels_.begin();
        it != activeChannels_.end(); ++it)
    {
      currentActiveChannel_ = *it;
      currentActiveChannel_->handleEvent(pollReturnTime_);
    }
    currentActiveChannel_ = NULL;
    eventHandling_ = false;
    // Run any queued-up functors (see queueInLoop below).
    doPendingFunctors();
  }

  LOG_TRACE << "EventLoop " << this << " stop looping";
  looping_ = false;
}

EventLoop can also be used to run arbitrary tasks; it provides the following methods:

  
// runInLoop: if called on the event-loop thread, execute the callback
// immediately; otherwise add it to the pending queue.
void EventLoop::runInLoop(const Functor& cb)
{
  if (isInLoopThread())
  {
    cb();
  }
  else
  {
    queueInLoop(cb);
  }
}
// The queueing side of the same mechanism.
void EventLoop::queueInLoop(const Functor& cb)
{
  {
    MutexLockGuard lock(mutex_);
    pendingFunctors_.push_back(cb);
  }
  if (!isInLoopThread() || callingPendingFunctors_)
  {
    // Wake up the loop thread.
    wakeup();
  }
}
Now let's see how EventLoop implements the wakeup.

  
void EventLoop::wakeup()
{
  uint64_t one = 1;
  // Write an 8-byte integer to the eventfd's kernel counter.
  ssize_t n = sockets::write(wakeupFd_, &one, sizeof one);
  if (n != sizeof one)
  {
    LOG_ERROR << "EventLoop::wakeup() writes " << n << " bytes instead of 8";
  }
}
When data arrives, wakeupChannel invokes the handleRead callback:

  
void EventLoop::handleRead()
{
  uint64_t one = 1;
  ssize_t n = sockets::read(wakeupFd_, &one, sizeof one);
  // If the buffer were smaller than 8 bytes, the read would fail with EINVAL.
  if (n != sizeof one)
  {
    LOG_ERROR << "EventLoop::handleRead() reads " << n << " bytes instead of 8";
  }
}
With that, the loop thread is woken up.

EventLoop also provides timing methods, implemented on top of the system timer facility (timerfd); we analyze that part under TimerQueue:

  
TimerId EventLoop::runAt(const Timestamp& time, const TimerCallback& cb)
{
  return timerQueue_->addTimer(cb, time, 0.0);
}
TimerId EventLoop::runAfter(double delay, const TimerCallback& cb)
{
  Timestamp time(addTime(Timestamp::now(), delay));
  return runAt(time, cb);
}
TimerId EventLoop::runEvery(double interval, const TimerCallback& cb)
{
  Timestamp time(addTime(Timestamp::now(), interval));
  return timerQueue_->addTimer(cb, time, interval);
}
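A minimal sketch of these timer methods in use (the callback bodies are illustrative):

#include <muduo/net/EventLoop.h>
#include <muduo/base/Logging.h>
#include <boost/bind.hpp>

using namespace muduo;
using namespace muduo::net;

void once() { LOG_INFO << "fires once, 2.5s after start"; }
void tick() { LOG_INFO << "fires every second"; }

int main()
{
  EventLoop loop;
  loop.runAfter(2.5, once); // one-shot timer
  loop.runEvery(1.0, tick); // repeating timer
  // Stop the loop after ~5 seconds.
  loop.runAt(addTime(Timestamp::now(), 5.0), boost::bind(&EventLoop::quit, &loop));
  loop.loop();              // expirations are dispatched via TimerQueue::handleRead
  return 0;
}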

5. EventLoopThread

EventLoopThread is a thread wrapper around EventLoop. Its two core methods are startLoop and threadFunc.
threadFunc runs in the dedicated thread:
  
void EventLoopThread::threadFunc()
{
  EventLoop loop;
  if (callback_)
  {
    // Invoke the thread-initialization callback, if any.
    callback_(&loop);
  }
  {
    MutexLockGuard lock(mutex_);
    // Publish the loop_ member. startLoop may already be waiting on the
    // condition variable, so notify it that initialization is complete.
    loop_ = &loop;
    cond_.notify();
  }
  // Enter the EventLoop's main loop and start processing events.
  loop.loop();
  //assert(exiting_);
  loop_ = NULL;
}
Now startLoop:
  
EventLoop* EventLoopThread::startLoop()
{
  // Start the thread.
  assert(!thread_.started());
  thread_.start();
  {
    // Wait until threadFunc has published loop_.
    MutexLockGuard lock(mutex_);
    while (loop_ == NULL)
    {
      cond_.wait();
    }
  }
  return loop_;
}
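Used like this (a minimal sketch): startLoop returns the loop owned by the new thread, and runInLoop hands work to it safely from the calling thread.

#include <muduo/net/EventLoop.h>
#include <muduo/net/EventLoopThread.h>
#include <muduo/base/Logging.h>
#include <unistd.h>

using namespace muduo;
using namespace muduo::net;

void inLoopTask()
{
  LOG_INFO << "running in the loop thread";
}

int main()
{
  EventLoopThread loopThread;
  EventLoop* loop = loopThread.startLoop(); // blocks until threadFunc publishes the loop
  // Called from the main thread, so this goes through queueInLoop() plus wakeup().
  loop->runInLoop(inLoopTask);
  usleep(100*1000); // give the task a moment to run before loopThread is destroyed
  return 0;
}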

6. EventLoopThreadPool

EventLoopThreadPool is comparatively simple, far less involved than ThreadPool: it just maintains a list of EventLoopThreads and hands their loops out round-robin via getNextLoop. Interested readers can consult the source directly.

7. TcpConnection: connection/session management

TcpConnection manages one established connection. Its constructor takes the event loop, a connection name, the socket fd, the local address, and the peer address. TcpConnection's main job is to install the channel's callbacks: read, write, close, and error.
Let's look at the send method, which works together with sendInLoop:
  
void TcpConnection::send(const void* data, size_t len)
{
  if (state_ == kConnected)
  {
    // If we are on the loop thread, call sendInLoop directly to send the data.
    if (loop_->isInLoopThread())
    {
      sendInLoop(data, len);
    }
    // Otherwise hand it to the loop's pendingFunctors_ queue;
    // EventLoop::runInLoop will wake the loop thread.
    else
    {
      string message(static_cast<const char*>(data), len);
      loop_->runInLoop(
          boost::bind(&TcpConnection::sendInLoop,
                      this, // FIXME
                      message));
    }
  }
}
Now the sendInLoop implementation:
  
void TcpConnection::sendInLoop(const void* data, size_t len)
{
  // Must run on the event-loop thread.
  loop_->assertInLoopThread();
  ssize_t nwrote = 0;
  size_t remaining = len;
  bool faultError = false;
  if (state_ == kDisconnected)
  {
    LOG_WARN << "disconnected, give up writing";
    return;
  }
  // if no thing in output queue, try writing directly
  // The channel is write-idle and the output buffer is empty:
  // try writing directly.
  if (!channel_->isWriting() && outputBuffer_.readableBytes() == 0)
  {
    nwrote = sockets::write(channel_->fd(), data, len);
    if (nwrote >= 0)
    {
      remaining = len - nwrote;
      // Everything was written: schedule the write-complete callback.
      if (remaining == 0 && writeCompleteCallback_)
      {
        // queueInLoop defers the low-priority callback to the end of
        // this loop iteration.
        loop_->queueInLoop(boost::bind(writeCompleteCallback_, shared_from_this()));
      }
    }
    else // nwrote < 0
    {
      nwrote = 0;
      if (errno != EWOULDBLOCK)
      {
        LOG_SYSERR << "TcpConnection::sendInLoop";
        if (errno == EPIPE || errno == ECONNRESET) // FIXME: any others?
        {
          faultError = true;
        }
      }
    }
  }
  assert(remaining <= len);
  if (!faultError && remaining > 0)
  {
    size_t oldLen = outputBuffer_.readableBytes();
    // The output buffer is crossing the high-water mark:
    // schedule the high-water-mark callback.
    if (oldLen + remaining >= highWaterMark_
        && oldLen < highWaterMark_
        && highWaterMarkCallback_)
    {
      loop_->queueInLoop(boost::bind(highWaterMarkCallback_, shared_from_this(), oldLen + remaining));
    }
    // Stash the remaining data in the output buffer; the write callback
    // handleWrite will send it when the socket becomes writable.
    outputBuffer_.append(static_cast<const char*>(data)+nwrote, remaining);
    if (!channel_->isWriting())
    {
      channel_->enableWriting();
    }
  }
}
Next, how handleWrite deals with the remaining data:
  
void TcpConnection::handleWrite()
{
  loop_->assertInLoopThread();
  // Confirm the channel is still interested in write events.
  if (channel_->isWriting())
  {
    // Write the buffered leftover data to the socket.
    ssize_t n = sockets::write(channel_->fd(),
                               outputBuffer_.peek(),
                               outputBuffer_.readableBytes());
    if (n > 0)
    {
      outputBuffer_.retrieve(n);
      // If the output buffer has been fully drained, disable the channel's
      // write interest and schedule the write-complete callback; otherwise
      // keep the write interest so the next handleWrite continues writing
      // until everything is out.
      if (outputBuffer_.readableBytes() == 0)
      {
        channel_->disableWriting();
        if (writeCompleteCallback_)
        {
          loop_->queueInLoop(boost::bind(writeCompleteCallback_, shared_from_this()));
        }
        if (state_ == kDisconnecting)
        {
          shutdownInLoop();
        }
      }
    }
    else
    {
      LOG_SYSERR << "TcpConnection::handleWrite";
      // if (state_ == kDisconnecting)
      // {
      // shutdownInLoop();
      // }
    }
  }
  else
  {
    LOG_TRACE << "Connection fd = " << channel_->fd()
              << " is down, no more writing";
  }
}
The remaining parts of the class can be read on your own.

8. TcpServer

With the groundwork above, TcpServer's implementation is easy to follow. Its constructor takes four arguments: the event loop, the listen address, the server name, and an option flag.
TcpServer owns an Acceptor that accepts connections; established connections are then spread over the event-loop thread pool, so many connections are served concurrently.
   
void TcpServer::start()
{
  if (started_.getAndSet(1) == 0)
  {
    // Start the event-loop thread pool.
    threadPool_->start(threadInitCallback_);
    assert(!acceptor_->listenning());
    // Start listening, on the loop thread.
    loop_->runInLoop(
        boost::bind(&Acceptor::listen, get_pointer(acceptor_)));
  }
}
In its constructor, TcpServer installs the acceptor's new-connection callback:
   
acceptor_->setNewConnectionCallback(
      boost::bind(&TcpServer::newConnection, this, _1, _2));
Let's see how the newConnection callback handles a fresh connection:
   
void TcpServer::newConnection(int sockfd, const InetAddress& peerAddr)
{
  loop_->assertInLoopThread();
  // Take the next event loop from the pool; round-robin ensures all loops
  // in the pool get used.
  EventLoop* ioLoop = threadPool_->getNextLoop();
  char buf[32];
  snprintf(buf, sizeof buf, ":%s#%d", hostport_.c_str(), nextConnId_);
  ++nextConnId_;
  string connName = name_ + buf;
  LOG_INFO << "TcpServer::newConnection [" << name_
           << "] - new connection [" << connName
           << "] from " << peerAddr.toIpPort();
  InetAddress localAddr(sockets::getLocalAddr(sockfd));
  // FIXME poll with zero timeout to double confirm the new connection
  // FIXME use make_shared if necessary
  // Create a TcpConnection from the fd plus the local and peer addresses
  // to manage this connection.
  TcpConnectionPtr conn(new TcpConnection(ioLoop,
                                          connName,
                                          sockfd,
                                          localAddr,
                                          peerAddr));
  // Register it in the connection map.
  connections_[connName] = conn;
  conn->setConnectionCallback(connectionCallback_);
  conn->setMessageCallback(messageCallback_);
  conn->setWriteCompleteCallback(writeCompleteCallback_);
  // The close callback is invoked from TcpConnection::handleClose;
  // it removes the connection from the map that TcpServer maintains.
  conn->setCloseCallback(
      boost::bind(&TcpServer::removeConnection, this, _1)); // FIXME: unsafe
  ioLoop->runInLoop(boost::bind(&TcpConnection::connectEstablished, conn));
}
   
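Putting TcpServer together, here is a minimal echo-server sketch (the port and names are illustrative):

#include <muduo/net/TcpServer.h>
#include <muduo/net/EventLoop.h>
#include <muduo/net/InetAddress.h>

using namespace muduo;
using namespace muduo::net;

// Echo whatever arrives straight back to the peer.
void onMessage(const TcpConnectionPtr& conn, Buffer* buf, Timestamp)
{
  conn->send(buf->retrieveAllAsString());
}

int main()
{
  EventLoop loop;
  InetAddress listenAddr(2007);
  TcpServer server(&loop, listenAddr, "EchoServer");
  server.setMessageCallback(onMessage);
  server.setThreadNum(4); // 4 I/O loops; newConnection round-robins over them
  server.start();         // Acceptor::listen runs in the loop thread
  loop.loop();
  return 0;
}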

9. TcpClient

TcpClient delegates the heavy lifting to Connector; its constructor installs the new-connection callback:
  
{
  connector_->setNewConnectionCallback(
      boost::bind(&TcpClient::newConnection, this, _1));
  // FIXME setConnectFailedCallback
  LOG_INFO << "TcpClient::TcpClient[" << name_
           << "] - connector " << get_pointer(connector_);
}
Let's look at what newConnection does:
  
void TcpClient::newConnection(int sockfd)
{
  loop_->assertInLoopThread();
  InetAddress peerAddr(sockets::getPeerAddr(sockfd));
  char buf[32];
  snprintf(buf, sizeof buf, ":%s#%d", peerAddr.toIpPort().c_str(), nextConnId_);
  ++nextConnId_;
  string connName = name_ + buf;
  InetAddress localAddr(sockets::getLocalAddr(sockfd));
  // FIXME poll with zero timeout to double confirm the new connection
  // FIXME use make_shared if necessary
  // Again a TcpConnection is created to manage the connection.
  TcpConnectionPtr conn(new TcpConnection(loop_,
                                          connName,
                                          sockfd,
                                          localAddr,
                                          peerAddr));
  conn->setConnectionCallback(connectionCallback_);
  conn->setMessageCallback(messageCallback_);
  conn->setWriteCompleteCallback(writeCompleteCallback_);
  // Set the close callback, which tells TcpClient to remove/reset
  // the connection.
  conn->setCloseCallback(
      boost::bind(&TcpClient::removeConnection, this, _1)); // FIXME: unsafe
  {
    MutexLockGuard lock(mutex_);
    connection_ = conn;
  }
  conn->connectEstablished();
}
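And a matching minimal client sketch (the address and names are illustrative), which connects to the echo server above and sends one line:

#include <muduo/net/TcpClient.h>
#include <muduo/net/EventLoop.h>
#include <muduo/net/InetAddress.h>
#include <muduo/base/Logging.h>

using namespace muduo;
using namespace muduo::net;

void onConnection(const TcpConnectionPtr& conn)
{
  if (conn->connected())
    conn->send("hello muduo\n"); // goes through TcpConnection::send shown earlier
}

void onMessage(const TcpConnectionPtr& conn, Buffer* buf, Timestamp)
{
  LOG_INFO << "echo: " << buf->retrieveAllAsString();
}

int main()
{
  EventLoop loop;
  InetAddress serverAddr("127.0.0.1", 2007);
  TcpClient client(&loop, serverAddr, "EchoClient");
  client.setConnectionCallback(onConnection);
  client.setMessageCallback(onMessage);
  client.connect(); // drives Connector::startInLoop -> connect() shown earlier
  loop.loop();
  return 0;
}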

10. TimerQueue

TimerQueue's events also go through the poller, so it creates a channel (on a timerfd) for its notifications:
  
// When a timer expires, the timerfd becomes readable and triggers
// the read callback.
timerfdChannel_.setReadCallback(
    boost::bind(&TimerQueue::handleRead, this));
// we are always reading the timerfd, we disarm it with timerfd_settime.
timerfdChannel_.enableReading();
The handleRead callback:
  
void TimerQueue::handleRead()
{
  loop_->assertInLoopThread();
  Timestamp now(Timestamp::now());
  readTimerfd(timerfd_, now);
  // Collect the timers that have expired by now.
  std::vector<Entry> expired = getExpired(now);
  callingExpiredTimers_ = true;
  cancelingTimers_.clear();
  // safe to callback outside critical section
  // Run each expired timer's callback.
  for (std::vector<Entry>::iterator it = expired.begin();
       it != expired.end(); ++it)
  {
    it->second->run();
  }
  callingExpiredTimers_ = false;
  reset(expired, now);
}
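The timerfd mechanism underneath, as a standalone POSIX sketch: arming the timer makes the fd readable on expiry, and reading it yields an 8-byte expiration count (which is what readTimerfd consumes).

#include <sys/timerfd.h>
#include <unistd.h>
#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main()
{
  int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
  if (tfd < 0) { perror("timerfd_create"); return 1; }

  // Arm a one-shot timer that expires in 1 second.
  struct itimerspec when;
  memset(&when, 0, sizeof when);
  when.it_value.tv_sec = 1;
  timerfd_settime(tfd, 0, &when, NULL);

  // Wait for readability, as the poller does for timerfdChannel_.
  struct pollfd pfd = { tfd, POLLIN, 0 };
  poll(&pfd, 1, -1);

  // The read returns the number of expirations since the last read.
  uint64_t expirations = 0;
  read(tfd, &expirations, sizeof expirations);
  printf("expired %llu time(s)\n", (unsigned long long)expirations);

  close(tfd);
  return 0;
}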

11. Poller

12. Socket








