Last time I read through caolan's async.js source. This time I'm studying Chen Shuo's multithreaded network library muduo, hosted at https://github.com/chenshuo/muduo.
Background reading:
posix thread(http://en.wikipedia.org/wiki/POSIX_Threads)
boost library(http://www.boost.org/)
Base library (muduo/base)
1. AsyncLogging (asynchronous logging)
The asynchronous logger maintains two buffers: log messages are appended to the current buffer, while nextBuffer_ is held in reserve. When the current buffer fills up, the logger switches to nextBuffer_ and keeps appending there. Since this runs under multiple threads, the swap is of course done while holding a lock.
The front-end append() that producer threads call:

void AsyncLogging::append(const char* logline, int len)
{
  muduo::MutexLockGuard lock(mutex_);
  // If the current buffer has room, append directly; otherwise push the
  // full buffer onto the write queue and take over nextBuffer_'s storage
  // via move semantics.
  if (currentBuffer_->avail() > len)
  {
    currentBuffer_->append(logline, len);
  }
  else
  {
    buffers_.push_back(currentBuffer_.release());
    if (nextBuffer_)
    {
      currentBuffer_ = boost::ptr_container::move(nextBuffer_);
    }
    else
    {
      // Hard to hit in practice: this happens only when nextBuffer_ has
      // also been consumed and the writer thread has not caught up yet.
      // Two buffers are normally enough.
      currentBuffer_.reset(new Buffer); // Rarely happens
    }
    currentBuffer_->append(logline, len);
    // Notify the writer thread so it does not sit out the full flush
    // interval before picking up the queued buffers.
    cond_.notify();
  }
}
The back-end writer thread function:

void AsyncLogging::threadFunc()
{
  assert(running_ == true);
  latch_.countDown();
  LogFile output(basename_, rollSize_, false);
  BufferPtr newBuffer1(new Buffer);
  BufferPtr newBuffer2(new Buffer);
  newBuffer1->bzero();
  newBuffer2->bzero();
  BufferVector buffersToWrite;
  buffersToWrite.reserve(16);
  while (running_)
  {
    assert(newBuffer1 && newBuffer1->length() == 0);
    assert(newBuffer2 && newBuffer2->length() == 0);
    assert(buffersToWrite.empty());
    {
      muduo::MutexLockGuard lock(mutex_);
      // Nothing queued yet: wait for append()'s notification, or time out.
      if (buffers_.empty())  // unusual usage!
      {
        cond_.waitForSeconds(flushInterval_);
      }
      // Queue the current buffer and replace it with the spare newBuffer1,
      // using move semantics to avoid copying.
      buffers_.push_back(currentBuffer_.release());
      currentBuffer_ = boost::ptr_container::move(newBuffer1);
      // Hand the queued buffers to the local buffersToWrite so buffers_
      // can keep accepting new insertions outside the lock.
      buffersToWrite.swap(buffers_);
      if (!nextBuffer_)
      {
        nextBuffer_ = boost::ptr_container::move(newBuffer2);
      }
    }
    assert(!buffersToWrite.empty());
    // Cap the backlog at 25 buffers; erase anything beyond that.
    if (buffersToWrite.size() > 25)
    {
      char buf[256];
      snprintf(buf, sizeof buf, "Dropped log messages at %s, %zd larger buffers\n",
               Timestamp::now().toFormattedString().c_str(),
               buffersToWrite.size()-2);
      fputs(buf, stderr);
      output.append(buf, static_cast<int>(strlen(buf)));
      buffersToWrite.erase(buffersToWrite.begin()+25, buffersToWrite.end());
    }
    // Write out every queued buffer.
    for (size_t i = 0; i < buffersToWrite.size(); ++i)
    {
      // FIXME: use unbuffered stdio FILE ? or use ::writev ?
      output.append(buffersToWrite[i].data(), buffersToWrite[i].length());
    }
    // Keep only two buffers, to replenish newBuffer1 and newBuffer2.
    if (buffersToWrite.size() > 2)
    {
      // drop non-bzero-ed buffers, avoid trashing
      buffersToWrite.resize(2);
    }
    // Replenish newBuffer1.
    if (!newBuffer1)
    {
      assert(!buffersToWrite.empty());
      newBuffer1 = buffersToWrite.pop_back();
      newBuffer1->reset();
    }
    // Replenish newBuffer2.
    if (!newBuffer2)
    {
      assert(!buffersToWrite.empty());
      newBuffer2 = buffersToWrite.pop_back();
      newBuffer2->reset();
    }
    buffersToWrite.clear();
    output.flush();
  }
  output.flush();
}
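The two-buffer handoff can be sketched independently of muduo. Everything below (the class name `DoubleBufferLog`, `drain()`, the use of `std::string` as the buffer type) is my own illustration, not muduo's API; the point is the swap-under-lock protocol: the producer appends to a current buffer, a full buffer is parked on a queue, and a pre-allocated spare is swapped in so the allocation almost never happens on the hot path.

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <string>
#include <vector>

// Minimal sketch of muduo's double-buffering idea (names are mine).
class DoubleBufferLog {
public:
    explicit DoubleBufferLog(size_t capacity)
        : capacity_(capacity),
          current_(new std::string),
          spare_(new std::string) {}

    // Producer side: append under the lock; on overflow, hand the full
    // buffer to the queue and take the pre-allocated spare.
    void append(const std::string& line) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (current_->size() + line.size() > capacity_) {
            full_.push_back(std::move(current_));
            if (spare_) {
                current_ = std::move(spare_);     // cheap pointer move
            } else {
                current_.reset(new std::string);  // rare: must allocate
            }
        }
        current_->append(line);
    }

    // Consumer side: swap the queue out under the lock, then "write"
    // (here: return) the contents outside the critical section.
    std::vector<std::string> drain() {
        std::vector<std::unique_ptr<std::string>> toWrite;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            full_.push_back(std::move(current_));
            current_.reset(new std::string);
            toWrite.swap(full_);
        }
        std::vector<std::string> out;
        for (auto& b : toWrite) out.push_back(*b);
        return out;
    }

private:
    const size_t capacity_;
    std::mutex mutex_;
    std::unique_ptr<std::string> current_;
    std::unique_ptr<std::string> spare_;
    std::vector<std::unique_ptr<std::string>> full_;
};
```

Note how `drain()` mirrors `threadFunc`: the swap happens inside the lock, but the (slow) writing happens outside it, so producers are never blocked on disk I/O.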
2. ThreadLocal (per-thread data)
muduo's per-thread data is built on pthread thread-specific storage. First a storage key is created, passing in a destructor:
pthread_key_create(&pkey_,&ThreadLocal::destructor);
Then a value is set for that key:
pthread_setspecific(pkey_,newObj);
muduo wraps this generically, so a thread-local variable is created simply as ThreadLocal<T>().
value() lazily constructs this thread's instance on first access:

T& value()
{
  T* perThreadValue = static_cast<T*>(pthread_getspecific(pkey_));
  if (!perThreadValue)
  {
    T* newObj = new T();
    pthread_setspecific(pkey_, newObj);
    perThreadValue = newObj;
  }
  return *perThreadValue;
}
One detail worth noting in the destructor: it rejects incomplete types at compile time, because calling delete on a pointer to an incomplete type is undefined behavior. sizeof on an incomplete type does not compile, so the char-array typedef below turns misuse into a compile error:

static void destructor(void* x)
{
  T* obj = static_cast<T*>(x);
  typedef char T_must_be_complete_type[sizeof(T) == 0 ? -1 : 1];
  T_must_be_complete_type dummy; (void) dummy;
  delete obj;
}
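Stripped of the template wrapper, the pthread-key mechanism that ThreadLocal<T> rides on looks like this. This is a sketch of my own (the names `threadValue`, `makeKey`, `destroyValue` are illustrative, and I use a plain `int` where muduo stores a `T`): each thread that calls `threadValue()` gets its own lazily-created copy, and the destructor registered with `pthread_key_create` frees it on thread exit.

```cpp
#include <cassert>
#include <pthread.h>

static pthread_key_t g_key;
static pthread_once_t g_keyOnce = PTHREAD_ONCE_INIT;

// Registered destructor: runs per thread, on thread exit, for non-NULL values.
static void destroyValue(void* p) { delete static_cast<int*>(p); }
static void makeKey() { pthread_key_create(&g_key, &destroyValue); }

// Lazily create this thread's copy, like ThreadLocal<T>::value().
int& threadValue() {
    pthread_once(&g_keyOnce, &makeKey);
    int* v = static_cast<int*>(pthread_getspecific(g_key));
    if (!v) {
        v = new int(0);
        pthread_setspecific(g_key, v);
    }
    return *v;
}
```

Writing through `threadValue()` in one thread is invisible to every other thread; each sees its own freshly zero-initialized copy.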
3. Singletons
muduo also provides two singleton classes: an ordinary Singleton and a per-thread ThreadLocalSingleton.
Singleton is the ordinary one. The instance is created through pthread_once, which guarantees the constructor runs exactly once no matter how many threads race on it.
Here ponce_ is the once-control variable and value_ the static instance pointer:

static T& instance()
{
  pthread_once(&ponce_, &Singleton::init);
  return *value_;
}
Both are class statics:

private:
  static pthread_once_t ponce_;
  static T* value_;

pthread_once guarantees single initialization. Now for ThreadLocalSingleton, which, like ThreadLocal, is built on a pthread key.
Here the per-thread resource is managed by an instance of a Deleter class; instance() lazily creates the value and registers it with the deleter:

static T& instance()
{
  if (!t_value_)
  {
    t_value_ = new T();
    deleter_.set(t_value_);
  }
  return *t_value_;
}

Deleter implements three functions: its constructor creates the key, its destructor deletes the key, and set() installs the per-thread value. The point of using a pthread key at all is so each thread's resource is released correctly on thread exit:

Deleter()
{
  pthread_key_create(&pkey_, &ThreadLocalSingleton::destructor);
}
~Deleter()
{
  pthread_key_delete(pkey_);
}
void set(T* newObj)
{
  assert(pthread_getspecific(pkey_) == NULL);
  pthread_setspecific(pkey_, newObj);
}
In ThreadLocalSingleton, t_value_ is declared with __thread, so every thread instantiates its own copy of the singleton:

static __thread T* t_value_;
static Deleter deleter_;
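The ordinary Singleton shown above can be reproduced almost verbatim with standard pthreads; the sketch below follows the snippets from the post (the `Counter` struct is my own test payload, and I omit muduo's atexit/no-destroy details). pthread_once is what makes the lazy initialization race-free.

```cpp
#include <cassert>
#include <pthread.h>

// Sketch of muduo's Singleton<T>: pthread_once guarantees init() runs
// exactly once, even when many threads race on instance().
template <typename T>
class Singleton {
public:
    static T& instance() {
        pthread_once(&ponce_, &Singleton::init);
        return *value_;
    }
private:
    static void init() { value_ = new T(); }   // runs exactly once
    static pthread_once_t ponce_;
    static T* value_;
};

template <typename T> pthread_once_t Singleton<T>::ponce_ = PTHREAD_ONCE_INIT;
template <typename T> T* Singleton<T>::value_ = nullptr;

// Illustrative payload type for trying the template out.
struct Counter { int n = 0; };
```

Every call to `Singleton<Counter>::instance()` returns a reference to the same object.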
4. ThreadPool
muduo also ships a simple, usable thread pool, ThreadPool. Each worker thread runs runInThread, which repeatedly takes a task from the task queue queue_ and executes it, over and over.
void ThreadPool::runInThread()
{
  try
  {
    while (running_)
    {
      Task task(take());
      if (task)
      {
        task();
      }
    }
  }
  catch (const Exception& ex)
  {
    fprintf(stderr, "exception caught in ThreadPool %s\n", name_.c_str());
    fprintf(stderr, "reason: %s\n", ex.what());
    fprintf(stderr, "stack trace: %s\n", ex.stackTrace());
    abort();
  }
  catch (const std::exception& ex)
  {
    fprintf(stderr, "exception caught in ThreadPool %s\n", name_.c_str());
    fprintf(stderr, "reason: %s\n", ex.what());
    abort();
  }
  catch (...)
  {
    fprintf(stderr, "unknown exception caught in ThreadPool %s\n", name_.c_str());
    throw; // rethrow
  }
}
The task-taking function take() naturally has to lock:

ThreadPool::Task ThreadPool::take()
{
  MutexLockGuard lock(mutex_);
  // always use a while-loop, due to spurious wakeup
  while (queue_.empty() && running_)
  {
    // idle: wait for work
    notEmpty_.wait();
  }
  Task task;
  if (!queue_.empty())
  {
    task = queue_.front();
    queue_.pop_front();
    if (maxQueueSize_ > 0)
    {
      // The queue has a free slot again: wake any producer blocked in run().
      notFull_.notify();
    }
  }
  return task;
}
run() is the entry point for submitting tasks; here is its rvalue-reference overload:

void ThreadPool::run(Task&& task)
{
  if (threads_.empty())
  {
    // No worker threads: run the task directly.
    task();
  }
  else
  {
    MutexLockGuard lock(mutex_);
    while (isFull())
    {
      // Queue is full: wait for a slot.
      notFull_.wait();
    }
    assert(!isFull());
    // std::move avoids copying the task object.
    queue_.push_back(std::move(task));
    // Tell the workers a task is available.
    notEmpty_.notify();
  }
}
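The same take()/run() protocol can be condensed into a standalone pool using std::thread. This is my own sketch (the class `MiniPool` is not part of muduo, and I fold take() into the worker loop and skip the maxQueueSize_/notFull_ backpressure); the essential pieces survive: a mutex-guarded deque, a condition variable with a while-style wait against spurious wakeups, and running the task outside the lock.

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// Compact thread pool following the take()/run() protocol above.
class MiniPool {
public:
    explicit MiniPool(int n) {
        for (int i = 0; i < n; ++i)
            workers_.emplace_back([this] { workerLoop(); });
    }
    ~MiniPool() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            running_ = false;
        }
        notEmpty_.notify_all();
        for (auto& t : workers_) t.join();   // drains remaining tasks
    }
    void run(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push_back(std::move(task));
        }
        notEmpty_.notify_one();
    }
private:
    void workerLoop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                // predicate wait == the "while-loop against spurious wakeup"
                notEmpty_.wait(lock, [this] { return !queue_.empty() || !running_; });
                if (!running_ && queue_.empty()) return;
                task = std::move(queue_.front());
                queue_.pop_front();
            }
            task();  // run outside the lock
        }
    }
    std::mutex mutex_;
    std::condition_variable notEmpty_;
    std::deque<std::function<void()>> queue_;
    std::vector<std::thread> workers_;
    bool running_ = true;
};
```

The destructor here doubles as stop(): it flips running_, wakes everyone, and joins after the queue is drained.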
5. Miscellaneous
The base library also contains a number of everyday utilities, such as logging, file operations, and time handling, which I won't analyze one by one.
Network library (muduo/net)
The net directory contains muduo's implementation of the Reactor pattern.
1. Acceptor
When a connection arrives, the handleRead callback is invoked. listen() sets that up:

void Acceptor::listen()
{
  loop_->assertInLoopThread();
  listenning_ = true;
  // listen() on the non-blocking socket
  acceptSocket_.listen();
  // Enable reading so the accept callback fires; the callback is wired to
  // TcpServer's newConnection.
  acceptChannel_.enableReading();
}
void Acceptor::handleRead()
{
  loop_->assertInLoopThread();
  InetAddress peerAddr(0);
  //FIXME loop until no more
  int connfd = acceptSocket_.accept(&peerAddr);
  if (connfd >= 0)
  {
    // string hostport = peerAddr.toIpPort();
    // LOG_TRACE << "Accepts of " << hostport;
    if (newConnectionCallback_)
    {
      // Bound to TcpServer::newConnection, which builds a TcpConnection
      // from the sockfd and the peer address.
      newConnectionCallback_(connfd, peerAddr);
    }
    else
    {
      sockets::close(connfd);
    }
  }
  else
  {
    LOG_SYSERR << "in Acceptor::handleRead";
    // Read the section named "The special problem of
    // accept()ing when you can't" in libev's doc.
    // By Marc Lehmann, author of libev.
    if (errno == EMFILE)
    {
      ::close(idleFd_);
      idleFd_ = ::accept(acceptSocket_.fd(), NULL, NULL);
      ::close(idleFd_);
      idleFd_ = ::open("/dev/null", O_RDONLY | O_CLOEXEC);
    }
  }
}
2. Connector
Let's see how connecting and retry are chosen depending on the result of the non-blocking connect:

void Connector::connect()
{
  // non-blocking socket
  int sockfd = sockets::createNonblockingOrDie();
  // initiate the connection
  int ret = sockets::connect(sockfd, serverAddr_.getSockAddrInet());
  int savedErrno = (ret == 0) ? 0 : errno;
  switch (savedErrno)
  {
    case 0:
    case EINPROGRESS:
    case EINTR:
    case EISCONN:
      // success (or still in progress): hand off to connecting()
      connecting(sockfd);
      break;

    case EAGAIN:
    case EADDRINUSE:
    case EADDRNOTAVAIL:
    case ECONNREFUSED:
    case ENETUNREACH:
      // transient failure: retry
      retry(sockfd);
      break;

    case EACCES:
    case EPERM:
    case EAFNOSUPPORT:
    case EALREADY:
    case EBADF:
    case EFAULT:
    case ENOTSOCK:
      // unrecoverable error: close the socket
      LOG_SYSERR << "connect error in Connector::startInLoop " << savedErrno;
      sockets::close(sockfd);
      break;

    default:
      LOG_SYSERR << "Unexpected error in Connector::startInLoop " << savedErrno;
      sockets::close(sockfd);
      // connectErrorCallback_();
      break;
  }
}
connecting() installs a write callback on the channel; a non-blocking connect signals completion by making the socket writable:

void Connector::connecting(int sockfd)
{
  setState(kConnecting);
  assert(!channel_);
  channel_.reset(new Channel(loop_, sockfd));
  // The write callback is where the new connection is actually established.
  channel_->setWriteCallback(
      boost::bind(&Connector::handleWrite, this)); // FIXME: unsafe
  channel_->setErrorCallback(
      boost::bind(&Connector::handleError, this)); // FIXME: unsafe
  // channel_->tie(shared_from_this()); is not working,
  // as channel_ is not managed by shared_ptr
  channel_->enableWriting();
}
The handleWrite callback checks whether the connect actually succeeded:

void Connector::handleWrite()
{
  LOG_TRACE << "Connector::handleWrite " << state_;
  if (state_ == kConnecting)
  {
    // Reset the channel and release its resources.
    int sockfd = removeAndResetChannel();
    int err = sockets::getSocketError(sockfd);
    if (err)
    {
      LOG_WARN << "Connector::handleWrite - SO_ERROR = "
               << err << " " << strerror_tl(err);
      retry(sockfd);
    }
    else if (sockets::isSelfConnect(sockfd))
    {
      LOG_WARN << "Connector::handleWrite - Self connect";
      retry(sockfd);
    }
    else
    {
      setState(kConnected);
      if (connect_)
      {
        // Calls back into TcpClient's newConnection, which again manages
        // the connection through a TcpConnection.
        newConnectionCallback_(sockfd);
      }
      else
      {
        sockets::close(sockfd);
      }
    }
  }
  else
  {
    // what happened?
    assert(state_ == kDisconnected);
  }
}
The retry function schedules another attempt after a delay:

void Connector::retry(int sockfd)
{
  sockets::close(sockfd);
  setState(kDisconnected);
  if (connect_)
  {
    LOG_INFO << "Connector::retry - Retry connecting to " << serverAddr_.toIpPort()
             << " in " << retryDelayMs_ << " milliseconds. ";
    // Use the loop's timer to delay the next attempt.
    loop_->runAfter(retryDelayMs_/1000.0,
                    boost::bind(&Connector::startInLoop, shared_from_this()));
    retryDelayMs_ = std::min(retryDelayMs_ * 2, kMaxRetryDelayMs);
  }
  else
  {
    LOG_DEBUG << "do not connect";
  }
}
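Note the last line of retry(): the delay doubles on every failed attempt, capped at kMaxRetryDelayMs, which is classic exponential backoff. Isolated, the policy is just this (the function name `nextRetryDelay` is mine; the 500 ms / 30 s constants are the defaults I believe muduo's Connector uses, so treat them as an assumption):

```cpp
#include <algorithm>

// Exponential backoff as used by Connector::retry, in isolation.
const int kInitRetryDelayMs = 500;        // first retry after 0.5 s (assumed default)
const int kMaxRetryDelayMs  = 30 * 1000;  // never wait longer than 30 s

int nextRetryDelay(int currentMs) {
    // double the wait, but clamp it at the ceiling
    return std::min(currentMs * 2, kMaxRetryDelayMs);
}
```

Doubling keeps a flapping server from being hammered, while the cap guarantees the client still probes at least every 30 seconds.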
3. Channel

void Channel::update()
{
  loop_->updateChannel(this);
}
Channel::update delegates to the event loop, which in turn hands the channel to the poller:

void EventLoop::updateChannel(Channel* channel)
{
  assert(channel->ownerLoop() == this);
  assertInLoopThread();
  // delegate the update to the poller
  poller_->updateChannel(channel);
}

Back in the event loop's main cycle, this code dispatches the events the poller reported on each active channel:
for (ChannelList::iterator it = activeChannels_.begin();
     it != activeChannels_.end(); ++it)
{
  currentActiveChannel_ = *it;
  currentActiveChannel_->handleEvent(pollReturnTime_);
}
4. EventLoop
EventLoop glues the poller and the channels together: whenever the poller produces new events, the corresponding channels have their state updated.
EventLoop's cross-thread wakeup mechanism deserves attention: muduo uses eventfd. After creation, a write() adds an 8-byte integer to a kernel counter, and a read() returns the counter's value and resets it to zero; in blocking mode, read() blocks while the counter is zero.
The main loop method, loop(), drives the poll call and the channel dispatch. The source:
void EventLoop::loop()
{
  assert(!looping_);
  assertInLoopThread();
  looping_ = true;
  quit_ = false;  // FIXME: what if someone calls quit() before loop() ?
  LOG_TRACE << "EventLoop " << this << " start looping";
  while (!quit_)
  {
    activeChannels_.clear();
    // Poll for events; whether poll(2) or epoll is used is selected via the
    // MUDUO_USE_POLL switch. Returns the timestamp of the poll's return.
    pollReturnTime_ = poller_->poll(kPollTimeMs, &activeChannels_);
    ++iteration_;
    // logging
    if (Logger::logLevel() <= Logger::TRACE)
    {
      printActiveChannels();
    }
    // TODO sort channel by priority
    eventHandling_ = true;
    // Walk the channels the poll marked active and let each one handle the
    // events recorded at the last poll instant.
    for (ChannelList::iterator it = activeChannels_.begin();
         it != activeChannels_.end(); ++it)
    {
      currentActiveChannel_ = *it;
      currentActiveChannel_->handleEvent(pollReturnTime_);
    }
    currentActiveChannel_ = NULL;
    eventHandling_ = false;
    // Run any queued functors.
    doPendingFunctors();
  }
  LOG_TRACE << "EventLoop " << this << " stop looping";
  looping_ = false;
}
Now, how EventLoop wakes its thread. runInLoop() executes the callback immediately when called on the loop thread, and otherwise queues it:

void EventLoop::runInLoop(const Functor& cb)
{
  if (isInLoopThread())
  {
    cb();
  }
  else
  {
    queueInLoop(cb);
  }
}

// the queued path
void EventLoop::queueInLoop(const Functor& cb)
{
  {
    MutexLockGuard lock(mutex_);
    pendingFunctors_.push_back(cb);
  }
  if (!isInLoopThread() || callingPendingFunctors_)
  {
    // wake the loop thread
    wakeup();
  }
}
wakeup() makes the eventfd readable, which is what pops the sleeping poll:

void EventLoop::wakeup()
{
  uint64_t one = 1;
  // write an 8-byte integer to the kernel counter
  ssize_t n = sockets::write(wakeupFd_, &one, sizeof one);
  if (n != sizeof one)
  {
    LOG_ERROR << "EventLoop::wakeup() writes " << n << " bytes instead of 8";
  }
}
The wakeupChannel then calls handleRead, and the woken loop thread drains the counter:

void EventLoop::handleRead()
{
  uint64_t one = 1;
  ssize_t n = sockets::read(wakeupFd_, &one, sizeof one);
  // A read with a buffer smaller than 8 bytes fails with errno set to EINVAL.
  if (n != sizeof one)
  {
    LOG_ERROR << "EventLoop::handleRead() reads " << n << " bytes instead of 8";
  }
}
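The eventfd semantics are easy to observe directly. The sketch below is Linux-only and mine, not muduo's code (the helpers `createWakeupFd`/`wakeup`/`drainWakeup` are illustrative names): writes accumulate into the kernel counter, and a single read returns the accumulated count and zeroes it.

```cpp
#include <cassert>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

// Linux-only sketch of the wakeup()/handleRead() pair.
int createWakeupFd() {
    // non-blocking, like muduo's wakeup fd
    return ::eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
}

void wakeup(int fd) {
    uint64_t one = 1;
    // the write must be exactly 8 bytes
    ssize_t n = ::write(fd, &one, sizeof one);
    assert(n == static_cast<ssize_t>(sizeof one));
}

uint64_t drainWakeup(int fd) {
    uint64_t count = 0;
    // reads the accumulated counter and resets it to zero;
    // on an empty counter a non-blocking read fails with EAGAIN
    ::read(fd, &count, sizeof count);
    return count;
}
```

In the real loop, the payload value is irrelevant; what matters is that the fd becomes readable, which is enough to break poll() out of its sleep.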
EventLoop also offers timer methods, implemented on top of the system timer facility; we will look at that part under TimerQueue below.

TimerId EventLoop::runAt(const Timestamp& time, const TimerCallback& cb)
{
  return timerQueue_->addTimer(cb, time, 0.0);
}
TimerId EventLoop::runAfter(double delay, const TimerCallback& cb)
{
  Timestamp time(addTime(Timestamp::now(), delay));
  return runAt(time, cb);
}
TimerId EventLoop::runEvery(double interval, const TimerCallback& cb)
{
  Timestamp time(addTime(Timestamp::now(), interval));
  return timerQueue_->addTimer(cb, time, interval);
}
5. EventLoopThread
threadFunc runs in the newly started thread:

void EventLoopThread::threadFunc()
{
  EventLoop loop;
  if (callback_)
  {
    // thread-initialized callback
    callback_(&loop);
  }
  {
    MutexLockGuard lock(mutex_);
    // Publish the loop to the member variable loop_. startLoop may run
    // ahead of this code, so signal through the condition variable that
    // initialization is complete.
    loop_ = &loop;
    cond_.notify();
  }
  // Start EventLoop's cycle and begin handling events.
  loop.loop();
  //assert(exiting_);
  loop_ = NULL;
}
startLoop starts the thread and blocks until loop_ has been published:

EventLoop* EventLoopThread::startLoop()
{
  // start the thread
  assert(!thread_.started());
  thread_.start();
  {
    // wait until the thread has finished initializing
    MutexLockGuard lock(mutex_);
    while (loop_ == NULL)
    {
      cond_.wait();
    }
  }
  return loop_;
}
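The startLoop()/threadFunc() handshake is a generic pattern: the caller must not return until the new thread has published a pointer to its stack-allocated object. Here is a standalone sketch of my own (class `LoopThread` and the `int` standing in for the EventLoop are illustrative), using std::thread and a predicate wait in place of muduo::Thread and the manual while-loop:

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// The start-and-wait handshake in isolation.
class LoopThread {
public:
    int* start() {
        thread_ = std::thread([this] { threadFunc(); });
        std::unique_lock<std::mutex> lock(mutex_);
        // block until the new thread has published its pointer
        cond_.wait(lock, [this] { return loop_ != nullptr; });
        return loop_;
    }
    void join() { thread_.join(); }
private:
    void threadFunc() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            loop_ = &loopStorage_;   // "publish" the loop pointer
        }
        cond_.notify_one();
        // ... loop.loop() would run here in the real class ...
    }
    std::thread thread_;
    std::mutex mutex_;
    std::condition_variable cond_;
    int loopStorage_ = 0;  // stands in for the thread's stack EventLoop
    int* loop_ = nullptr;
};
```

The predicate form of wait() handles both orderings: if the new thread publishes before the caller starts waiting, the wait returns immediately.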
6. EventLoopThreadPool
EventLoopThreadPool is comparatively simple, far less involved than ThreadPool: it just maintains a list of EventLoopThreads and hands their loops out in round-robin fashion. Have a look at the source if you're interested.
7. TcpConnection (connection management)
send() is the public entry point; it forwards to sendInLoop:

void TcpConnection::send(const void* data, size_t len)
{
  if (state_ == kConnected)
  {
    // If called on the loop thread, invoke sendInLoop directly...
    if (loop_->isInLoopThread())
    {
      sendInLoop(data, len);
    }
    // ...otherwise queue it via runInLoop, which places it on the loop's
    // pendingFunctors_ queue and wakes the loop thread.
    else
    {
      string message(static_cast<const char*>(data), len);
      loop_->runInLoop(
          boost::bind(&TcpConnection::sendInLoop,
                      this, // FIXME
                      message));
    }
  }
}
sendInLoop tries a direct write and parks whatever is left in the output buffer:

void TcpConnection::sendInLoop(const void* data, size_t len)
{
  // must run on the loop thread
  loop_->assertInLoopThread();
  ssize_t nwrote = 0;
  size_t remaining = len;
  bool faultError = false;
  if (state_ == kDisconnected)
  {
    LOG_WARN << "disconnected, give up writing";
    return;
  }
  // if no thing in output queue, try writing directly
  // (the channel is write-idle, so a direct write is safe)
  if (!channel_->isWriting() && outputBuffer_.readableBytes() == 0)
  {
    nwrote = sockets::write(channel_->fd(), data, len);
    if (nwrote >= 0)
    {
      remaining = len - nwrote;
      // Everything written: schedule the write-complete callback.
      if (remaining == 0 && writeCompleteCallback_)
      {
        // lower priority, hence queueInLoop rather than a direct call
        loop_->queueInLoop(boost::bind(writeCompleteCallback_, shared_from_this()));
      }
    }
    else // nwrote < 0
    {
      nwrote = 0;
      if (errno != EWOULDBLOCK)
      {
        LOG_SYSERR << "TcpConnection::sendInLoop";
        if (errno == EPIPE || errno == ECONNRESET) // FIXME: any others?
        {
          faultError = true;
        }
      }
    }
  }
  assert(remaining <= len);
  if (!faultError && remaining > 0)
  {
    size_t oldLen = outputBuffer_.readableBytes();
    // If the output buffer crosses the high-water mark, fire the
    // high-water callback.
    if (oldLen + remaining >= highWaterMark_
        && oldLen < highWaterMark_
        && highWaterMarkCallback_)
    {
      loop_->queueInLoop(boost::bind(highWaterMarkCallback_, shared_from_this(), oldLen + remaining));
    }
    // Park the leftover bytes in the output buffer; the write callback
    // handleWrite will finish the job when the socket becomes writable.
    outputBuffer_.append(static_cast<const char*>(data)+nwrote, remaining);
    if (!channel_->isWriting())
    {
      channel_->enableWriting();
    }
  }
}
handleWrite then drains the buffered remainder; the rest of TcpConnection you can read on your own:

void TcpConnection::handleWrite()
{
  loop_->assertInLoopThread();
  // only act if the channel is still in writing state
  if (channel_->isWriting())
  {
    // push the leftover bytes into the socket
    ssize_t n = sockets::write(channel_->fd(),
                               outputBuffer_.peek(),
                               outputBuffer_.readableBytes());
    if (n > 0)
    {
      outputBuffer_.retrieve(n);
      // If the output buffer is fully drained, run the write-complete
      // callback and leave writing state; otherwise stay in writing state
      // so the next handleWrite keeps writing the remainder until done.
      if (outputBuffer_.readableBytes() == 0)
      {
        channel_->disableWriting();
        if (writeCompleteCallback_)
        {
          loop_->queueInLoop(boost::bind(writeCompleteCallback_, shared_from_this()));
        }
        if (state_ == kDisconnecting)
        {
          shutdownInLoop();
        }
      }
    }
    else
    {
      LOG_SYSERR << "TcpConnection::handleWrite";
      // if (state_ == kDisconnecting)
      // {
      //   shutdownInLoop();
      // }
    }
  }
  else
  {
    LOG_TRACE << "Connection fd = " << channel_->fd()
              << " is down, no more writing";
  }
}
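The core of the sendInLoop/handleWrite pair is partial-write bookkeeping: write directly only when the buffer is idle, and append whatever the kernel did not accept to an output buffer. That logic can be checked in isolation with an injectable write function. This is a sketch of my own (the class `OutputQueue` and the `writeFn` parameter are illustrative; `writeFn` stands in for sockets::write and returns the number of bytes "the kernel" accepted):

```cpp
#include <cstddef>
#include <functional>
#include <string>

// Partial-write bookkeeping from sendInLoop, in miniature.
class OutputQueue {
public:
    explicit OutputQueue(std::function<size_t(const char*, size_t)> writeFn)
        : write_(std::move(writeFn)) {}

    void send(const std::string& data) {
        size_t nwrote = 0;
        if (buffer_.empty()) {              // only write directly when idle
            nwrote = write_(data.data(), data.size());
        }
        // queue everything the write did not take (possibly all of it)
        buffer_.append(data, nwrote, std::string::npos);
    }

    const std::string& pending() const { return buffer_; }
private:
    std::function<size_t(const char*, size_t)> write_;
    std::string buffer_;
};
```

Writing directly only when the buffer is empty is what preserves byte ordering: once anything is queued, new data must go behind it, never around it.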
8. TcpServer
start() spins up the thread pool and begins listening:

void TcpServer::start()
{
  if (started_.getAndSet(1) == 0)
  {
    // start the thread pool
    threadPool_->start(threadInitCallback_);
    assert(!acceptor_->listenning());
    // start listening
    loop_->runInLoop(boost::bind(&Acceptor::listen, get_pointer(acceptor_)));
  }
}
The constructor wires the Acceptor's new-connection callback to TcpServer::newConnection:

acceptor_->setNewConnectionCallback(
    boost::bind(&TcpServer::newConnection, this, _1, _2));
Let's see how newConnection handles each accepted socket:

void TcpServer::newConnection(int sockfd, const InetAddress& peerAddr)
{
  loop_->assertInLoopThread();
  // Take the next event-loop thread from the pool; round-robin ensures all
  // the pool's threads get used.
  EventLoop* ioLoop = threadPool_->getNextLoop();
  char buf[32];
  snprintf(buf, sizeof buf, ":%s#%d", hostport_.c_str(), nextConnId_);
  ++nextConnId_;
  string connName = name_ + buf;
  LOG_INFO << "TcpServer::newConnection [" << name_
           << "] - new connection [" << connName
           << "] from " << peerAddr.toIpPort();
  InetAddress localAddr(sockets::getLocalAddr(sockfd));
  // FIXME poll with zero timeout to double confirm the new connection
  // FIXME use make_shared if necessary
  // Build a TcpConnection from the local and peer addresses to manage the
  // connected pair.
  TcpConnectionPtr conn(new TcpConnection(ioLoop,
                                          connName,
                                          sockfd,
                                          localAddr,
                                          peerAddr));
  // register it in the connection map
  connections_[connName] = conn;
  conn->setConnectionCallback(connectionCallback_);
  conn->setMessageCallback(messageCallback_);
  conn->setWriteCompleteCallback(writeCompleteCallback_);
  // Invoked from TcpConnection's handleClose; removes the connection from
  // the map that TcpServer maintains.
  conn->setCloseCallback(
      boost::bind(&TcpServer::removeConnection, this, _1)); // FIXME: unsafe
  ioLoop->runInLoop(boost::bind(&TcpConnection::connectEstablished, conn));
}
9. TcpClient
The constructor registers newConnection on the Connector; let's see how it handles a fresh connection.

{
  connector_->setNewConnectionCallback(
      boost::bind(&TcpClient::newConnection, this, _1));
  // FIXME setConnectFailedCallback
  LOG_INFO << "TcpClient::TcpClient[" << name_
           << "] - connector " << get_pointer(connector_);
}
void TcpClient::newConnection(int sockfd)
{
  loop_->assertInLoopThread();
  InetAddress peerAddr(sockets::getPeerAddr(sockfd));
  char buf[32];
  snprintf(buf, sizeof buf, ":%s#%d", peerAddr.toIpPort().c_str(), nextConnId_);
  ++nextConnId_;
  string connName = name_ + buf;
  InetAddress localAddr(sockets::getLocalAddr(sockfd));
  // FIXME poll with zero timeout to double confirm the new connection
  // FIXME use make_shared if necessary
  // Again, a TcpConnection manages the connected pair.
  TcpConnectionPtr conn(new TcpConnection(loop_,
                                          connName,
                                          sockfd,
                                          localAddr,
                                          peerAddr));
  conn->setConnectionCallback(connectionCallback_);
  conn->setMessageCallback(messageCallback_);
  conn->setWriteCompleteCallback(writeCompleteCallback_);
  // On close, notify TcpClient so it can reset the connection.
  conn->setCloseCallback(
      boost::bind(&TcpClient::removeConnection, this, _1)); // FIXME: unsafe
  {
    MutexLockGuard lock(mutex_);
    connection_ = conn;
  }
  conn->connectEstablished();
}
10. TimerQueue
The constructor wires the timerfd channel's read callback to handleRead; when a timer expires the timerfd becomes readable and the callback fires:

// When the time is up, the timerfd becomes readable, triggering the
// read callback.
timerfdChannel_.setReadCallback(
    boost::bind(&TimerQueue::handleRead, this));
// we are always reading the timerfd, we disarm it with timerfd_settime.
timerfdChannel_.enableReading();
void TimerQueue::handleRead()
{
  loop_->assertInLoopThread();
  Timestamp now(Timestamp::now());
  readTimerfd(timerfd_, now);
  // collect the timers that have expired
  std::vector<Entry> expired = getExpired(now);
  callingExpiredTimers_ = true;
  cancelingTimers_.clear();
  // safe to callback outside critical section
  // run each expired timer's callback
  for (std::vector<Entry>::iterator it = expired.begin();
       it != expired.end(); ++it)
  {
    it->second->run();
  }
  callingExpiredTimers_ = false;
  reset(expired, now);
}
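The interesting part of getExpired is the data structure: timers live in a std::set ordered by (expiration, tiebreaker), so one lower_bound call splits expired timers from pending ones. The sketch below is my own simplification (a `long long` millisecond count and an `int` id stand in for muduo's Timestamp and Timer*; muduo's actual sentry uses the maximum pointer value for the same effect):

```cpp
#include <climits>
#include <set>
#include <utility>
#include <vector>

typedef std::pair<long long, int> Entry;  // (expiration in ms, timer id)

// Split off every timer due at or before nowMs, the way
// TimerQueue::getExpired does with its set<pair<Timestamp, Timer*>>.
std::vector<Entry> getExpired(std::set<Entry>& timers, long long nowMs) {
    // Sentry sorts after every entry whose expiration <= nowMs, so
    // lower_bound lands exactly on the first still-pending timer.
    Entry sentry(nowMs, INT_MAX);
    std::set<Entry>::iterator end = timers.lower_bound(sentry);
    std::vector<Entry> expired(timers.begin(), end);
    timers.erase(timers.begin(), end);
    return expired;
}
```

Because the set keeps timers sorted, the earliest expiration is always `timers.begin()`, which is also what the timerfd gets armed with.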
11. Poller
12. Socket