Handler, Looper, MessageQueue: Looper

3 The Looper thread

A Looper thread is, first of all, a thread whose run() method calls Looper.prepare() and Looper.loop().
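
For example, the canonical pattern from the Looper class documentation:

// The classic Looper-thread pattern, as given in Looper's own Javadoc:
// prepare() binds a Looper to this thread, loop() starts dispatching messages.
class LooperThread extends Thread {
    public Handler mHandler;

    public void run() {
        Looper.prepare();

        mHandler = new Handler() {
            public void handleMessage(Message msg) {
                // process incoming messages here
            }
        };

        Looper.loop();
    }
}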

3.1 Looper's prepare()

This method lives in /frameworks/base/core/java/android/os/Looper.java. prepare() initializes a Looper for the calling thread and stores it in the thread's ThreadLocal. (ThreadLocal guarantees that each thread has its own independent copy of the stored variable; it will be analyzed in a later article.) It first checks whether the current thread already has a Looper stored, which guarantees that prepare() can be called at most once per thread, i.e. each thread has exactly one Looper object.

static final ThreadLocal<Looper> sThreadLocal = new ThreadLocal<Looper>();
public static void prepare() {
    prepare(true);
}

private static void prepare(boolean quitAllowed) {
    if (sThreadLocal.get() != null) {
        throw new RuntimeException("Only one Looper may be created per thread");
    }
    sThreadLocal.set(new Looper(quitAllowed));
}

private Looper(boolean quitAllowed) {
    mQueue = new MessageQueue(quitAllowed);
    mThread = Thread.currentThread();
}
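
As a quick illustration of the ThreadLocal semantics mentioned above (a minimal standalone sketch, not Android code), each thread reads back only the value it set itself:

// Each thread gets an independent slot in the ThreadLocal, which is exactly
// what lets Looper keep one instance per thread in sThreadLocal.
public class ThreadLocalDemo {
    static final ThreadLocal<String> sLocal = new ThreadLocal<>();

    public static void main(String[] args) {
        Runnable task = () -> {
            sLocal.set("value of " + Thread.currentThread().getName());
            System.out.println(sLocal.get()); // prints this thread's own value
        };
        new Thread(task, "thread-1").start();
        new Thread(task, "thread-2").start();
    }
}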

The quitAllowed parameter of prepare() indicates whether this Looper's message loop is allowed to quit; it defaults to true, meaning quitting is allowed. Through the MessageQueue constructor, prepare() ultimately reaches nativeInit(), a JNI method that actually lives in frameworks/base/core/jni/android_os_MessageQueue.cpp and does two things:

1. Creates a NativeMessageQueue and increments its reference count via incStrong.

2. Converts its pointer to a long with reinterpret_cast and returns it to the Java layer; this pointer conversion will be analyzed in a separate article.

static jlong android_os_MessageQueue_nativeInit(JNIEnv* env, jclass clazz) {
    NativeMessageQueue* nativeMessageQueue = new NativeMessageQueue();
    if (!nativeMessageQueue) {
        jniThrowRuntimeException(env, "Unable to allocate native queue");
        return 0;
    }
    nativeMessageQueue->incStrong(env);
    return reinterpret_cast<jlong>(nativeMessageQueue);
}
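
On the Java side, the returned jlong is kept in MessageQueue.mPtr and passed back down on every later native call. A simplified sketch of the relevant part of frameworks/base/core/java/android/os/MessageQueue.java:

// Simplified sketch: the address of the NativeMessageQueue created by
// nativeInit() is cached in mPtr; nativePollOnce(), nativeWake() and friends
// take it as their first argument to find the native object again.
public class MessageQueue {
    private final boolean mQuitAllowed;
    private long mPtr; // address of the NativeMessageQueue, used by native code

    private native static long nativeInit();

    MessageQueue(boolean quitAllowed) {
        mQuitAllowed = quitAllowed;
        mPtr = nativeInit();
    }
}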

NativeMessageQueue::NativeMessageQueue() : mPollEnv(NULL), mPollObj(NULL), mExceptionObj(NULL) {
    mLooper = Looper::getForThread();
    if (mLooper == NULL) {
        mLooper = new Looper(false);
        Looper::setForThread(mLooper);
    }
}

sp<Looper> Looper::getForThread() {
    int result = pthread_once(&gTLSOnce, initTLSKey);
    LOG_ALWAYS_FATAL_IF(result != 0, "pthread_once failed");
    return (Looper*)pthread_getspecific(gTLSKey);
}

The NativeMessageQueue constructor initializes the native Looper: getForThread() queries whether the current thread already has a Looper, and a new Looper object is created only if one does not exist. The native Looper lives in system/core/libutils/Looper.cpp. Its creation at the JNI layer proceeds as follows.

Here, eventfd() creates an eventfd object, used to implement a wait/notify mechanism between processes (or threads). The kernel maintains a 64-bit counter (uint64_t) for this object, initialized from the first argument, and the call returns a new file descriptor referring to the object. The second argument, EFD_NONBLOCK, puts the object into non-blocking mode: with this flag set, a read() on the eventfd while the counter is 0 returns an EAGAIN error (errno = EAGAIN) instead of blocking inside the read call. This prevents the Looper from blocking while it is being woken up.

epoll is then used to monitor events on mWakeEventFd; EPOLLIN is the event type being watched, i.e. data becoming available to read on mWakeEventFd.

Looper::Looper(bool allowNonCallbacks) :
    mAllowNonCallbacks(allowNonCallbacks), mSendingMessage(false),
    mPolling(false), mEpollFd(-1), mEpollRebuildRequired(false),
    mNextRequestSeq(0), mResponseIndex(0), mNextMessageUptime(LLONG_MAX) {
    mWakeEventFd = eventfd(0, EFD_NONBLOCK);
    LOG_ALWAYS_FATAL_IF(mWakeEventFd < 0, "Could not make wake event fd.  errno=%d", errno);
    AutoMutex _l(mLock);
    rebuildEpollLocked();
}

void Looper::rebuildEpollLocked() {
    // Close old epoll instance if we have one.
    if (mEpollFd >= 0) {
#if DEBUG_CALLBACKS
        ALOGD("%p ~ rebuildEpollLocked - rebuilding epoll set", this);
#endif
        close(mEpollFd);
    }

    // Allocate the new epoll instance and register the wake pipe.
    mEpollFd = epoll_create(EPOLL_SIZE_HINT);
    LOG_ALWAYS_FATAL_IF(mEpollFd < 0, "Could not create epoll instance.  errno=%d", errno);

    struct epoll_event eventItem;
    memset(&eventItem, 0, sizeof(epoll_event)); // zero out unused members of data field union
    eventItem.events = EPOLLIN;
    eventItem.data.fd = mWakeEventFd;
    int result = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, mWakeEventFd, &eventItem);
    LOG_ALWAYS_FATAL_IF(result != 0, "Could not add wake event fd to epoll instance.  errno=%d",
            errno);

    for (size_t i = 0; i < mRequests.size(); i++) {
        const Request& request = mRequests.valueAt(i);
        struct epoll_event eventItem;
        request.initEventItem(&eventItem);
        int epollResult = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, request.fd, &eventItem);
        if (epollResult < 0) {
            ALOGE("Error adding epoll events for fd %d while rebuilding epoll set, errno=%d",
                    request.fd, errno);
        }
    }
}

At this point, the initialization and creation of the Looper is complete.

In addition, Looper provides a prepareMainLooper() method, whose purpose is to initialize the main thread's (ActivityThread's) Looper. It first calls prepare() in non-quittable mode, which stores the current Looper in the ThreadLocal, and then saves that ThreadLocal Looper in the static field sMainLooper. This way, any other thread that needs the main thread's Looper can simply call Looper.getMainLooper().

public static void prepareMainLooper() {
    prepare(false);
    synchronized (Looper.class) {
        if (sMainLooper != null) {
            throw new IllegalStateException("The main Looper has already been prepared.");
        }
        sMainLooper = myLooper();
    }
}

public static Looper getMainLooper() {
    synchronized (Looper.class) {
        return sMainLooper;
    }
}
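
For reference, this is roughly how the framework entry point uses it (a simplified sketch of ActivityThread.main()); afterwards any thread can reach the UI thread with new Handler(Looper.getMainLooper()):

// Simplified sketch of ActivityThread.main(): the UI thread is itself just a
// Looper thread, prepared with the non-quittable main Looper.
public static void main(String[] args) {
    Looper.prepareMainLooper();   // prepare(false) + publish sMainLooper

    ActivityThread thread = new ActivityThread();
    thread.attach(false);         // hook the process up to the system server

    Looper.loop();                // does not return during normal operation

    throw new RuntimeException("Main thread loop unexpectedly exited");
}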

3.2 Sending messages with Looper (native layer)

Defined in system/core/libutils/Looper.cpp:

void Looper::sendMessage(const sp<MessageHandler>& handler, const Message& message) {
    nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
    sendMessageAtTime(now, handler, message);
}

void Looper::sendMessageDelayed(nsecs_t uptimeDelay, const sp<MessageHandler>& handler,
    const Message& message) {
    nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
    sendMessageAtTime(now + uptimeDelay, handler, message);
}

void Looper::sendMessageAtTime(nsecs_t uptime, const sp<MessageHandler>& handler, const Message& message) {
#if DEBUG_CALLBACKS
    ALOGD("%p ~ sendMessageAtTime - uptime=%" PRId64 ", handler=%p, what=%d",
        this, uptime, handler.get(), message.what);
#endif

    size_t i = 0;
    { // acquire lock
        AutoMutex _l(mLock);

        size_t messageCount = mMessageEnvelopes.size();
        while (i < messageCount && uptime >= mMessageEnvelopes.itemAt(i).uptime) {
            i += 1;
        }

        MessageEnvelope messageEnvelope(uptime, handler, message);
        mMessageEnvelopes.insertAt(messageEnvelope, i, 1);

        // Optimization: If the Looper is currently sending a message, then we can skip
        // the call to wake() because the next thing the Looper will do after processing
        // messages is to decide when the next wakeup time should be.  In fact, it does
        // not even matter whether this code is running on the Looper thread.
        if (mSendingMessage) {
            return;
        }
    } // release lock

    // Wake the poll loop only when we enqueue a new message at the head.
    if (i == 0) {
        wake();
    }
}


void Looper::wake() {
#if DEBUG_POLL_AND_WAKE
    ALOGD("%p ~ wake", this);
#endif

    uint64_t inc = 1;
    ssize_t nWrite = TEMP_FAILURE_RETRY(write(mWakeEventFd, &inc, sizeof(uint64_t)));
    if (nWrite != sizeof(uint64_t)) {
        if (errno != EAGAIN) {
            ALOGW("Could not write wake signal, errno=%d", errno);
        }
    }
}

This message-handling path is similar to the Java layer's: first find the insertion position in the message queue mMessageEnvelopes, then wrap the message and its handler in a MessageEnvelope and insert it into the queue. Finally, if the message was inserted at the head of the queue, call wake(), which writes to mWakeEventFd, triggering the epoll watch so that the looper resumes processing messages.
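
For comparison, the Java-side analogue, MessageQueue.enqueueMessage(), follows the same pattern: a queue sorted by delivery time, with a wake-up only when the new message becomes the head. A simplified sketch (synchronization barriers, async messages, and the quitting check are omitted):

// Simplified sketch of MessageQueue.enqueueMessage(): the queue is a singly
// linked list sorted by `when`; nativeWake() (which ends up in Looper::wake()
// above) is called only when the new message changes the earliest deadline.
boolean enqueueMessage(Message msg, long when) {
    synchronized (this) {
        msg.when = when;
        Message p = mMessages;       // current head
        boolean needWake;
        if (p == null || when == 0 || when < p.when) {
            msg.next = p;            // new head: the scheduled wakeup may be too late
            mMessages = msg;
            needWake = mBlocked;     // wake only if next() is actually sleeping
        } else {
            needWake = false;        // mid-queue insert: current wakeup still valid
            Message prev;
            for (;;) {
                prev = p;
                p = p.next;
                if (p == null || when < p.when) {
                    break;
                }
            }
            msg.next = p;
            prev.next = msg;
        }
        if (needWake) {
            nativeWake(mPtr);
        }
    }
    return true;
}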

3.3 removeMessages in Looper

The removeMessages methods of the JNI-layer Looper are defined in system/core/libutils/Looper.cpp:

void Looper::removeMessages(const sp<MessageHandler>& handler) {
#if DEBUG_CALLBACKS
    ALOGD("%p ~ removeMessages - handler=%p", this, handler.get());
#endif

    { // acquire lock
        AutoMutex _l(mLock);

        for (size_t i = mMessageEnvelopes.size(); i != 0; ) {
            const MessageEnvelope& messageEnvelope = mMessageEnvelopes.itemAt(--i);
            if (messageEnvelope.handler == handler) {
                mMessageEnvelopes.removeAt(i);
            }
        }
    } // release lock
}

void Looper::removeMessages(const sp<MessageHandler>& handler, int what) {
#if DEBUG_CALLBACKS
    ALOGD("%p ~ removeMessages - handler=%p, what=%d", this, handler.get(), what);
#endif

    { // acquire lock
        AutoMutex _l(mLock);

        for (size_t i = mMessageEnvelopes.size(); i != 0; ) {
            const MessageEnvelope& messageEnvelope = mMessageEnvelopes.itemAt(--i);
            if (messageEnvelope.handler == handler
                    && messageEnvelope.message.what == what) {
                mMessageEnvelopes.removeAt(i);
            }
        }
    } // release lock
}

3.4 fd-related methods in Looper

addFd() registers a file descriptor to be monitored. The parameter ident is an identifier for the event being listened for, events specifies which event types to monitor, and callback is the function invoked when an event occurs. The arguments are saved in a Request, which is stored in mRequests keyed by the file descriptor, and epoll_ctl is used to add or replace the event watch for that descriptor.

int Looper::addFd(int fd, int ident, int events, Looper_callbackFunc callback, void* data) {
    return addFd(fd, ident, events, callback ? new SimpleLooperCallback(callback) : NULL, data);
}

int Looper::addFd(int fd, int ident, int events, const sp<LooperCallback>& callback, void* data) {
#if DEBUG_CALLBACKS
    ALOGD("%p ~ addFd - fd=%d, ident=%d, events=0x%x, callback=%p, data=%p", this, fd, ident,
        events, callback.get(), data);
#endif
    if (!callback.get()) {
        if (!mAllowNonCallbacks) {
            ALOGE("Invalid attempt to set NULL callback but not allowed for this looper.");
            return -1;
        }

        if (ident < 0) {
            ALOGE("Invalid attempt to set NULL callback with ident < 0.");
            return -1;
        }
    } else {
        ident = POLL_CALLBACK;
    }

    { // acquire lock
        AutoMutex _l(mLock);

        Request request;
        request.fd = fd;
        request.ident = ident;
        request.events = events;
        request.seq = mNextRequestSeq++;
        request.callback = callback;
        request.data = data;
        if (mNextRequestSeq == -1) mNextRequestSeq = 0; // reserve sequence number -1

        struct epoll_event eventItem;
        request.initEventItem(&eventItem);

        ssize_t requestIndex = mRequests.indexOfKey(fd);
        if (requestIndex < 0) {
            int epollResult = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, fd, &eventItem);
            if (epollResult < 0) {
                ALOGE("Error adding epoll events for fd %d, errno=%d", fd, errno);
                return -1;
            }
            mRequests.add(fd, request);
        } else {
            int epollResult = epoll_ctl(mEpollFd, EPOLL_CTL_MOD, fd, &eventItem);
            if (epollResult < 0) {
                if (errno == ENOENT) {
                    // Tolerate ENOENT because it means that an older file descriptor was
                    // closed before its callback was unregistered and meanwhile a new
                    // file descriptor with the same number has been created and is now
                    // being registered for the first time.  This error may occur naturally
                    // when a callback has the side-effect of closing the file descriptor
                    // before returning and unregistering itself.  Callback sequence number
                    // checks further ensure that the race is benign.
                    //
                    // Unfortunately due to kernel limitations we need to rebuild the epoll
                    // set from scratch because it may contain an old file handle that we are
                    // now unable to remove since its file descriptor is no longer valid.
                    // No such problem would have occurred if we were using the poll system
                    // call instead, but that approach carries others disadvantages.
#if DEBUG_CALLBACKS
                    ALOGD("%p ~ addFd - EPOLL_CTL_MOD failed due to file descriptor "
                            "being recycled, falling back on EPOLL_CTL_ADD, errno=%d",
                            this, errno);
#endif
                    epollResult = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, fd, &eventItem);
                    if (epollResult < 0) {
                        ALOGE("Error modifying or adding epoll events for fd %d, errno=%d",
                                fd, errno);
                        return -1;
                    }
                    scheduleEpollRebuildLocked();
                } else {
                    ALOGE("Error modifying epoll events for fd %d, errno=%d", fd, errno);
                    return -1;
                }
            }
            mRequests.replaceValueAt(requestIndex, request);
        }
    } // release lock
    return 1;
}

int Looper::removeFd(int fd) {
    return removeFd(fd, -1);
}

int Looper::removeFd(int fd, int seq) {
#if DEBUG_CALLBACKS
    ALOGD("%p ~ removeFd - fd=%d, seq=%d", this, fd, seq);
#endif

    { // acquire lock
        AutoMutex _l(mLock);
        ssize_t requestIndex = mRequests.indexOfKey(fd);
        if (requestIndex < 0) {
            return 0;
        }

        // Check the sequence number if one was given.
        if (seq != -1 && mRequests.valueAt(requestIndex).seq != seq) {
    #if DEBUG_CALLBACKS
            ALOGD("%p ~ removeFd - sequence number mismatch, oldSeq=%d",
                    this, mRequests.valueAt(requestIndex).seq);
    #endif
            return 0;
        }

        // Always remove the FD from the request map even if an error occurs while
        // updating the epoll set so that we avoid accidentally leaking callbacks.
        mRequests.removeItemsAt(requestIndex);

        int epollResult = epoll_ctl(mEpollFd, EPOLL_CTL_DEL, fd, NULL);
        if (epollResult < 0) {
            if (seq != -1 && (errno == EBADF || errno == ENOENT)) {
                // Tolerate EBADF or ENOENT when the sequence number is known because it
                // means that the file descriptor was closed before its callback was
                // unregistered.  This error may occur naturally when a callback has the
                // side-effect of closing the file descriptor before returning and
                // unregistering itself.
                //
                // Unfortunately due to kernel limitations we need to rebuild the epoll
                // set from scratch because it may contain an old file handle that we are
                // now unable to remove since its file descriptor is no longer valid.
                // No such problem would have occurred if we were using the poll system
                // call instead, but that approach carries others disadvantages.
    #if DEBUG_CALLBACKS
                ALOGD("%p ~ removeFd - EPOLL_CTL_DEL failed due to file descriptor "
                        "being closed, errno=%d", this, errno);
    #endif
                scheduleEpollRebuildLocked();
            } else {
                // Some other error occurred.  This is really weird because it means
                // our list of callbacks got out of sync with the epoll set somehow.
                // We defensively rebuild the epoll set to avoid getting spurious
                // notifications with nowhere to go.
                ALOGE("Error removing epoll events for fd %d, errno=%d", fd, errno);
                scheduleEpollRebuildLocked();
                return -1;
            }
        }
    } // release lock
    return 1;
}
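
From the Java side, this machinery is exposed (API 23 and later) through MessageQueue.addOnFileDescriptorEventListener(), which routes down to addFd(). Returning 0 from the listener unregisters it, mirroring the callbackResult == 0 path in pollInner() in the next section. A usage sketch (the fd is assumed to be a readable FileDescriptor you own):

import android.os.Looper;
import android.os.MessageQueue;
import android.os.MessageQueue.OnFileDescriptorEventListener;
import java.io.FileDescriptor;

// Usage sketch of the Java-facing wrapper over the native addFd()/removeFd() path.
class FdWatcher {
    void watch(FileDescriptor fd) {
        MessageQueue queue = Looper.getMainLooper().getQueue(); // API 23+
        queue.addOnFileDescriptorEventListener(fd,
                OnFileDescriptorEventListener.EVENT_INPUT,
                (f, events) -> {
                    if ((events & OnFileDescriptorEventListener.EVENT_INPUT) != 0) {
                        // read from f here...
                        return OnFileDescriptorEventListener.EVENT_INPUT; // keep listening
                    }
                    return 0; // 0 unregisters, like callbackResult == 0 in pollInner()
                });
    }
}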

3.5 pollOnce

pollOnce() is invoked from MessageQueue. It mainly processes the Responses, returning the ident of any response whose ident is non-negative, and delegates the actual waiting to pollInner(), which blocks when the message queue is empty or when no message is due yet.

mResponses is filled in Looper::pushResponse(int events, const Request& request); mResponseIndex is initialized to 0 when the Looper object is created.
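
For context, a simplified sketch of the Java-side caller, MessageQueue.next() (idle handlers and synchronization barriers omitted); nativePollOnce() is what eventually blocks in the native pollOnce()/pollInner() shown below:

// Simplified sketch of MessageQueue.next(): nativePollOnce() blocks in
// epoll_wait until the head message is due or the queue is woken.
Message next() {
    final long ptr = mPtr;
    int nextPollTimeoutMillis = 0;
    for (;;) {
        nativePollOnce(ptr, nextPollTimeoutMillis); // may block
        synchronized (this) {
            final long now = SystemClock.uptimeMillis();
            Message msg = mMessages;
            if (msg != null) {
                if (now < msg.when) {
                    // Head message not yet due: sleep at most until its delivery time.
                    nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
                } else {
                    mBlocked = false;
                    mMessages = msg.next;
                    msg.next = null;
                    return msg; // a message is ready
                }
            } else {
                nextPollTimeoutMillis = -1; // queue empty: block indefinitely
            }
            mBlocked = true; // going back to sleep
        }
    }
}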

int Looper::pollOnce(int timeoutMillis, int* outFd, int* outEvents, void** outData) {
    int result = 0;
    for (;;) {
        while (mResponseIndex < mResponses.size()) {
            const Response& response = mResponses.itemAt(mResponseIndex++);
            int ident = response.request.ident;
            if (ident >= 0) {
                int fd = response.request.fd;
                int events = response.events;
                void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE
                ALOGD("%p ~ pollOnce - returning signalled identifier %d: "
                        "fd=%d, events=0x%x, data=%p",
                        this, ident, fd, events, data);
#endif
                if (outFd != NULL) *outFd = fd;
                if (outEvents != NULL) *outEvents = events;
                if (outData != NULL) *outData = data;
                return ident;
            }
        }

        if (result != 0) {
#if DEBUG_POLL_AND_WAKE
            ALOGD("%p ~ pollOnce - returning result %d", this, result);
#endif
            if (outFd != NULL) *outFd = 0;
            if (outEvents != NULL) *outEvents = 0;
            if (outData != NULL) *outData = NULL;
            return result;
        }

        result = pollInner(timeoutMillis);
    }
}

pollInner() uses the Linux kernel's epoll mechanism: it calls epoll_wait() to wait for events on mEpollFd, and if there are none it blocks for at most the time given by the timeoutMillis parameter.
In the for loop, the file descriptor of each event is extracted. For a readable (EPOLLIN) event on mWakeEventFd, awoken() is called; this is the path taken for messages sent through a Handler or through the native Looper's sendMessageAtTime(). awoken() reads mWakeEventFd, consuming the "1" written earlier. For the other descriptors, which were added via addFd(), the corresponding Request is looked up and stored into mResponses via pushResponse().

After the Done label, the messages themselves are processed. First come the native-layer messages stored in mMessageEnvelopes: for each due message, the handler stored in its MessageEnvelope is fetched and its handleMessage() method is called (the handler side is analyzed in "Handler Looper MessageQueue: Handler"). The delivery time of the message left at the head of the queue, i.e. the first one not yet due, is then assigned to mNextMessageUptime to determine the next wakeup time.

Next, the entries in mResponses are processed by invoking handleEvent(), the callback stored in each Request.

int Looper::pollInner(int timeoutMillis) {
#if DEBUG_POLL_AND_WAKE
    ALOGD("%p ~ pollOnce - waiting: timeoutMillis=%d", this, timeoutMillis);
#endif
    // Adjust the timeout based on when the next message is due.
    if (timeoutMillis != 0 && mNextMessageUptime != LLONG_MAX) {
        nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
        int messageTimeoutMillis = toMillisecondTimeoutDelay(now, mNextMessageUptime);
        if (messageTimeoutMillis >= 0
                && (timeoutMillis < 0 || messageTimeoutMillis < timeoutMillis)) {
            timeoutMillis = messageTimeoutMillis;
        }
#if DEBUG_POLL_AND_WAKE
        ALOGD("%p ~ pollOnce - next message in %" PRId64 "ns, adjusted timeout: timeoutMillis=%d",
                this, mNextMessageUptime - now, timeoutMillis);
#endif
    }

    // Poll.
    int result = POLL_WAKE;
    mResponses.clear();
    mResponseIndex = 0;

    // We are about to idle.
    mPolling = true;

    struct epoll_event eventItems[EPOLL_MAX_EVENTS];
    int eventCount = epoll_wait(mEpollFd, eventItems, EPOLL_MAX_EVENTS, timeoutMillis);

    // No longer idling.
    mPolling = false;

    // Acquire lock.
    mLock.lock();

    // Rebuild epoll set if needed.
    if (mEpollRebuildRequired) {
        mEpollRebuildRequired = false;
        rebuildEpollLocked();
        goto Done;
    }

    // Check for poll error.
    if (eventCount < 0) {
        if (errno == EINTR) {
            goto Done;
        }
        ALOGW("Poll failed with an unexpected error, errno=%d", errno);
        result = POLL_ERROR;
        goto Done;
    }

    // Check for poll timeout.
    if (eventCount == 0) {
#if DEBUG_POLL_AND_WAKE
        ALOGD("%p ~ pollOnce - timeout", this);
#endif
        result = POLL_TIMEOUT;
        goto Done;
    }

    // Handle all events.
#if DEBUG_POLL_AND_WAKE
    ALOGD("%p ~ pollOnce - handling events from %d fds", this, eventCount);
#endif

    for (int i = 0; i < eventCount; i++) {
        int fd = eventItems[i].data.fd;
        uint32_t epollEvents = eventItems[i].events;
        if (fd == mWakeEventFd) {
            if (epollEvents & EPOLLIN) {
                awoken();
            } else {
                ALOGW("Ignoring unexpected epoll events 0x%x on wake event fd.", epollEvents);
            }
        } else {
            ssize_t requestIndex = mRequests.indexOfKey(fd);
            if (requestIndex >= 0) {
                int events = 0;
                if (epollEvents & EPOLLIN) events |= EVENT_INPUT;
                if (epollEvents & EPOLLOUT) events |= EVENT_OUTPUT;
                if (epollEvents & EPOLLERR) events |= EVENT_ERROR;
                if (epollEvents & EPOLLHUP) events |= EVENT_HANGUP;
                pushResponse(events, mRequests.valueAt(requestIndex));
            } else {
                ALOGW("Ignoring unexpected epoll events 0x%x on fd %d that is "
                        "no longer registered.", epollEvents, fd);
            }
        }
    }
Done: ;

    // Invoke pending message callbacks.
    mNextMessageUptime = LLONG_MAX;
    while (mMessageEnvelopes.size() != 0) {
        nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
        const MessageEnvelope& messageEnvelope = mMessageEnvelopes.itemAt(0);
        if (messageEnvelope.uptime <= now) {
            // Remove the envelope from the list.
            // We keep a strong reference to the handler until the call to handleMessage
            // finishes.  Then we drop it so that the handler can be deleted *before*
            // we reacquire our lock.
            { // obtain handler
                sp<MessageHandler> handler = messageEnvelope.handler;
                Message message = messageEnvelope.message;
                mMessageEnvelopes.removeAt(0);
                mSendingMessage = true;
                mLock.unlock();

#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
                ALOGD("%p ~ pollOnce - sending message: handler=%p, what=%d",
                        this, handler.get(), message.what);
#endif
                handler->handleMessage(message);
            } // release handler

            mLock.lock();
            mSendingMessage = false;
            result = POLL_CALLBACK;
        } else {
            // The last message left at the head of the queue determines the next wakeup time.
            mNextMessageUptime = messageEnvelope.uptime;
            break;
        }
    }

    // Release lock.
    mLock.unlock();

    // Invoke all response callbacks.
    for (size_t i = 0; i < mResponses.size(); i++) {
        Response& response = mResponses.editItemAt(i);
        if (response.request.ident == POLL_CALLBACK) {
            int fd = response.request.fd;
            int events = response.events;
            void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
            ALOGD("%p ~ pollOnce - invoking fd event callback %p: fd=%d, events=0x%x, data=%p",
                    this, response.request.callback.get(), fd, events, data);
#endif
            // Invoke the callback.  Note that the file descriptor may be closed by
            // the callback (and potentially even reused) before the function returns so
            // we need to be a little careful when removing the file descriptor afterwards.
            int callbackResult = response.request.callback->handleEvent(fd, events, data);
            if (callbackResult == 0) {
                removeFd(fd, response.request.seq);
            }

            // Clear the callback reference in the response structure promptly because we
            // will not clear the response vector itself until the next poll.
            response.request.callback.clear();
            result = POLL_CALLBACK;
        }
    }
    return result;
}

void Looper::awoken() {
#if DEBUG_POLL_AND_WAKE
    ALOGD("%p ~ awoken", this);
#endif

    uint64_t counter;
    TEMP_FAILURE_RETRY(read(mWakeEventFd, &counter, sizeof(uint64_t)));
}

3.6 Looper's loop() method

This method lives in frameworks/base/core/java/android/os/Looper.java:

public static void loop() {
    final Looper me = myLooper(); // fetch the Looper stored in this thread's ThreadLocal
    if (me == null) {   // no Looper: prepare() was never called on this thread
        throw new RuntimeException("No Looper; Looper.prepare() wasn't called on this thread.");
    }
    final MessageQueue queue = me.mQueue; // each Looper owns a MessageQueue

    // Make sure the identity of this thread is that of the local process,
    // and keep track of what that identity token actually is.
    Binder.clearCallingIdentity();
    final long ident = Binder.clearCallingIdentity();

    for (;;) { // start of the message loop
        Message msg = queue.next(); // fetch a message from the MessageQueue; may block
        if (msg == null) {
            // No message indicates that the message queue is quitting.
            return;
        }

        // This must be in a local variable, in case a UI event sets the logger
        Printer logging = me.mLogging;
        if (logging != null) {
            logging.println(">>>>> Dispatching to " + msg.target + " " +
                    msg.callback + ": " + msg.what);
        }

        // Dispatch the message to its target (a Handler); the target was set when
        // the Message object was constructed.
        msg.target.dispatchMessage(msg);
        if (logging != null) {
            logging.println("<<<<< Finished to " + msg.target + " " + msg.callback);
        }

        // Make sure that during the course of dispatching the
        // identity of the thread wasn't corrupted.
        final long newIdent = Binder.clearCallingIdentity();
        if (ident != newIdent) {
            Log.wtf(TAG, "Thread identity changed from 0x"
                    + Long.toHexString(ident) + " to 0x"
                    + Long.toHexString(newIdent) + " while dispatching to "
                    + msg.target.getClass().getName() + " "
                    + msg.callback + " what=" + msg.what);
        }

        msg.recycleUnchecked(); // recycle the message back into the pool to avoid waste
    }
}

public static @Nullable Looper myLooper() {
    return sThreadLocal.get();
}

First, myLooper() retrieves the Looper object stored in the current thread's sThreadLocal and checks that it has been initialized. The MessageQueue is then obtained from the Looper, and an infinite loop polls the MessageQueue and processes its messages:

MessageQueue.next() is called in a loop to fetch messages and may block; it is analyzed in "Handler Looper MessageQueue: MessageQueue".
Message.target.dispatchMessage() dispatches each message to its Handler, which handles the response; it is analyzed in "Handler Looper MessageQueue: Handler".
Message.recycleUnchecked() recycles the message back into the message pool; it is analyzed in "Handler Looper MessageQueue: Message".