This article uses a concrete example to explain how a Server starts up in the Binder mechanism. Android provides multimedia playback as a service, so we will analyze the implementation of MediaPlayerService to understand how Media Server starts.
## A Full Analysis of MediaServer ##
First, let's see how MediaPlayerService gets started. The code that starts it is in frameworks/base/media/mediaserver/main_mediaserver.cpp:
int main(int argc, char** argv)
{
//1. obtain the ProcessState instance for this process
sp<ProcessState> proc(ProcessState::self());
//2. MediaServer is a client of ServiceManager: it must register its services with
//ServiceManager. defaultServiceManager() returns an IServiceManager.
sp<IServiceManager> sm = defaultServiceManager();
LOGI("ServiceManager: %p", sm.get());
AudioFlinger::instantiate();
//3. initialize MediaPlayerService; this is the main entry point of our analysis
MediaPlayerService::instantiate();
CameraService::instantiate();
AudioPolicyService::instantiate();
//4. the name alone suggests this is communication related
ProcessState::self()->startThreadPool();
//5. join the main thread to the binder thread pool
IPCThreadState::self()->joinThreadPool();
}
We will not look at the AudioFlinger or CameraService code here; let's start with the following line:
## 1.1 ProcessState ##
`sp<ProcessState> proc(ProcessState::self());`
This line creates a ProcessState instance through the ProcessState::self() call. ProcessState::self() is a static member function of the ProcessState class, defined in frameworks/base/libs/binder/ProcessState.cpp:
sp<ProcessState> ProcessState::self()
{
if (gProcess != NULL) return gProcess;
AutoMutex _l(gProcessMutex);
if (gProcess == NULL) gProcess = new ProcessState;
return gProcess;
}
Now look at the constructor of ProcessState:
ProcessState::ProcessState()
//note open_driver()
: mDriverFD(open_driver())
//start address of the mmap'ed area
, mVMStart(MAP_FAILED)
, mManagesContexts(false)
, mBinderContextCheckFunc(NULL)
, mBinderContextUserData(NULL)
, mThreadPoolStarted(false)
, mThreadPoolSeq(1)
{
if (mDriverFD >= 0) {
// XXX Ideally, there should be a specific define for whether we
// have mmap (or whether we could possibly have the kernel module
// available).
#if !defined(HAVE_WIN32_IPC)
// mmap the binder, providing a chunk of virtual address space to receive transactions.
//the Binder driver will deliver transaction data into this mapped region
mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
if (mVMStart == MAP_FAILED) {
// *sigh*
LOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
close(mDriverFD);
mDriverFD = -1;
}
#else
mDriverFD = -1;
#endif
}
if (mDriverFD < 0) {
// Need to run without the driver, starting our own thread pool.
}
}
There are two key points in this constructor: first, it opens the Binder device file /dev/binder via open_driver and saves the resulting file descriptor in the member variable mDriverFD; second, it maps the device file /dev/binder into memory with mmap.
Let's look at open_driver first; it is also in frameworks/base/libs/binder/ProcessState.cpp:
static int open_driver()
{
if (gSingleProcess) {
return -1;
}
int fd = open("/dev/binder", O_RDWR);
if (fd >= 0) {
fcntl(fd, F_SETFD, FD_CLOEXEC);
int vers;
#if defined(HAVE_ANDROID_OS)
status_t result = ioctl(fd, BINDER_VERSION, &vers);
#else
status_t result = -1;
errno = EPERM;
#endif
if (result == -1) {
LOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
close(fd);
fd = -1;
}
if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
LOGE("Binder driver protocol does not match user space protocol!");
close(fd);
fd = -1;
}
#if defined(HAVE_ANDROID_OS)
size_t maxThreads = 15;
//tell the Binder driver via ioctl that the maximum number of threads for this fd is 15
result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
if (result == -1) {
LOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
}
#endif
} else {
LOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
}
return fd;
}
### At this point the analysis of ProcessState is complete. It mainly does the following: ###
1. Opens the /dev/binder device, which gives the process a channel for talking to the kernel Binder driver.
2. Calls mmap on the returned fd, so the Binder driver sets up a block of memory to receive transaction data (see the sketch below).
3. Because ProcessState is a singleton, each process opens the device only once.
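To make those two steps concrete, here is a minimal, hypothetical sketch of what this initialization looks like at the raw syscall level: open the driver, check the protocol version, set the thread limit, and mmap a receive buffer. It mirrors open_driver() and the ProcessState constructor above; the header path and the kBinderVmSize value are assumptions, not taken from the excerpts.
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/android/binder.h>   // binder ioctl codes; the header location varies by kernel/platform version

// Assumed placeholder; the real BINDER_VM_SIZE is defined in ProcessState.cpp
static const size_t kBinderVmSize = 1 * 1024 * 1024 - 8192;

int binder_init_sketch(void** vm_start)
{
    int fd = open("/dev/binder", O_RDWR);        // step 1: open the driver
    if (fd < 0) return -1;

    int vers = 0;
    if (ioctl(fd, BINDER_VERSION, &vers) < 0 ||  // sanity-check the protocol version
        vers != BINDER_CURRENT_PROTOCOL_VERSION) {
        close(fd);
        return -1;
    }

    size_t max_threads = 15;                     // the same limit open_driver() sets
    ioctl(fd, BINDER_SET_MAX_THREADS, &max_threads);

    // step 2: map a read-only receive buffer; the driver copies transaction data here
    *vm_start = mmap(0, kBinderVmSize, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    if (*vm_start == MAP_FAILED) {
        close(fd);
        return -1;
    }
    return fd;
}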
## 2 defaultServiceManager ##
sp<IServiceManager> defaultServiceManager();
It is implemented in frameworks/base/libs/binder/IServiceManager.cpp:
sp<IServiceManager> defaultServiceManager()
{
if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
{
AutoMutex _l(gDefaultServiceManagerLock);
if (gDefaultServiceManager == NULL) {
//the real gDefaultServiceManager is created here
gDefaultServiceManager = interface_cast<IServiceManager>(
ProcessState::self()->getContextObject(NULL));
}
}
return gDefaultServiceManager;
}
ProcessState::self() of course returns the gProcess we just created, and getContextObject is then called on it. Note that the argument passed in is NULL, i.e. 0.
Back to the ProcessState class:
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
//whether "processes" are supported depends on whether the binder device was opened successfully
if (supportsProcesses()) {
//on a real device this branch is always taken
return getStrongProxyForHandle(0);
}
}
Now step into getStrongProxyForHandle:
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;
AutoMutex _l(mLock);
handle_entry* e = lookupHandleLocked(handle);
...
if (e != NULL) {
IBinder* b = e->binder; //the first time through, this is NULL
if (b == NULL || !e->refs->attemptIncWeak(this)) {
b = new BpBinder(handle); //create a BpBinder
e->binder = b;
result = b;
}
....
}
return result; //return the BpBinder we just created
}
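getStrongProxyForHandle is essentially a lookup-or-create cache keyed by the binder handle: the real ProcessState keeps a Vector<handle_entry> (mHandleToObject) and also checks weak references, but the core logic boils down to something like the conceptual sketch below (the FakeBinderProxy/ProxyCache names are made up for illustration and are not AOSP types).
#include <cstdint>
#include <map>

// Hypothetical stand-ins for the real types, just to show the caching pattern.
struct FakeBinderProxy {
    explicit FakeBinderProxy(int32_t h) : handle(h) {}
    int32_t handle;                                   // like BpBinder::mHandle
};

class ProxyCache {
public:
    FakeBinderProxy* proxyForHandle(int32_t handle) {
        auto it = mProxies.find(handle);
        if (it != mProxies.end())
            return it->second;                        // reuse the existing proxy for this handle
        FakeBinderProxy* p = new FakeBinderProxy(handle);  // first request: create it
        mProxies[handle] = p;
        return p;
    }
private:
    std::map<int32_t, FakeBinderProxy*> mProxies;     // handle -> proxy, like mHandleToObject
};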
## 2.1 BpBinder ##
So what is this BpBinder we just met? It is worth first introducing its twin brother, BBinder.
BpBinder and BBinder are the two representatives involved in Binder communication, and both derive from IBinder.
BpBinder lives in frameworks/base/libs/binder/BpBinder.cpp.
BpBinder::BpBinder(int32_t handle)
: mHandle(handle) //note: following on from above, the value passed in here is 0
, mAlive(1)
, mObitsSent(0)
, mObituaries(NULL)
{
//IPCThreadState is an important object; it will be discussed in detail shortly
IPCThreadState::self()->incWeakHandle(handle);
}
Remember, this is the call we started analyzing from:
gDefaultServiceManager = interface_cast<IServiceManager>(
ProcessState::self()->getContextObject(NULL));
That call has now effectively become:
gDefaultServiceManager = interface_cast<IServiceManager>(new BpBinder(0));
Let's look at the implementation of interface_cast; the code is in frameworks/base/include/binder/IInterface.h:
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
return INTERFACE::asInterface(obj);
}
So the call above is equivalent to:
inline sp<IServiceManager> interface_cast(const sp<IBinder>& obj)
{
return IServiceManager::asInterface(obj);
}
## 2.2 IServiceManager ##
Let's look at IServiceManager::asInterface(); it is implemented in frameworks/base/libs/binder/IServiceManager.cpp:
android::String16 IServiceManager::descriptor("android.os.IServiceManager");
const android::String16& IServiceManager::getInterfaceDescriptor() const
{
return IServiceManager::descriptor;
//i.e. "android.os.IServiceManager"
}
android::sp<IServiceManager>
//the implementation of asInterface
IServiceManager::asInterface(
const android::sp<android::IBinder>& obj)
{
android::sp<IServiceManager> intr;
if (obj != NULL) {
intr = static_cast<IServiceManager *>(
obj->queryLocalInterface(IServiceManager::descriptor).get());
if (intr == NULL) {
//note: obj is the BpBinder(0) we just created
intr = new BpServiceManager(obj);
}
}
return intr;
}
IServiceManager::IServiceManager () { }
IServiceManager::~ IServiceManager() { }
Now it is finally clear: interface_cast ends up constructing a BpServiceManager object, passing the BpBinder object in as its constructor argument.
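In AOSP the asInterface shown above is not written by hand; it is generated by the DECLARE_META_INTERFACE / IMPLEMENT_META_INTERFACE macros in IInterface.h. As a hedged illustration, a hypothetical IFoo interface would typically be wired up as follows (IFoo, BpFoo, DO_SOMETHING and the "com.example.IFoo" descriptor are invented for this sketch):
#include <binder/IInterface.h>
#include <binder/Parcel.h>
#include <utils/Errors.h>
using namespace android;

// A hypothetical binder interface, following the usual AOSP pattern.
class IFoo : public IInterface
{
public:
    DECLARE_META_INTERFACE(Foo);   // declares descriptor, asInterface(), getInterfaceDescriptor()
    enum { DO_SOMETHING = IBinder::FIRST_CALL_TRANSACTION };
    virtual status_t doSomething(int32_t value) = 0;
};

// The proxy class used by the generated asInterface().
class BpFoo : public BpInterface<IFoo>
{
public:
    BpFoo(const sp<IBinder>& impl) : BpInterface<IFoo>(impl) {}
    virtual status_t doSomething(int32_t value)
    {
        Parcel data, reply;
        data.writeInterfaceToken(IFoo::getInterfaceDescriptor());
        data.writeInt32(value);
        // remote() is the BpBinder handed to interface_cast
        return remote()->transact(DO_SOMETHING, data, &reply);
    }
};

// Expands to descriptor/asInterface definitions like the ones shown above, with BpFoo as the proxy.
IMPLEMENT_META_INTERFACE(Foo, "com.example.IFoo");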
## 2.3 BpServiceManager ##
Now look at the BpServiceManager class; it is an inner class defined in IServiceManager.cpp:
class BpServiceManager : public BpInterface<IServiceManager>
//inheriting BpInterface<IServiceManager> effectively means inheriting both BpInterface and IServiceManager, so IServiceManager's addService must be implemented in this class
{
public:
//the impl passed in here is the new BpBinder(0)
BpServiceManager(const sp<IBinder>& impl)
: BpInterface<IServiceManager>(impl)
{
}
virtual status_t addService(const String16& name, const sp<IBinder>& service)
{
......
}
};
The constructor of the base class BpInterface:
inline BpInterface< IServiceManager >::BpInterface(const sp<IBinder>& remote)
: BpRefBase(remote)
{
}
And the constructor of BpRefBase:
BpRefBase::BpRefBase(const sp<IBinder>& o)
//mRemote is exactly the BpBinder(0) from before
: mRemote(o.get()), mRefs(NULL), mState(0)
{
extendObjectLifetime(OBJECT_LIFETIME_WEAK);
if (mRemote) {
mRemote->incStrong(this);
mRefs = mRemote->createWeak(this);
}
}
### At this point defaultServiceManager has been fully analyzed, and we have these key facts: ###
1. There is a BpBinder object, and its handle value is 0.
2. There is a BpServiceManager object, and its mRemote is that BpBinder.
3. BpServiceManager implements the IServiceManager business functions, and with BpBinder as its communication representative, the work ahead is clear.
## 3 MediaPlayerService ##
Now let's see what MediaPlayerService does. The code is in frameworks/base/media/libmediaplayerservice/MediaPlayerService.cpp:
void MediaPlayerService::instantiate() {
defaultServiceManager()->addService(
//pass in the service name together with the newly created service object
String16("media.player"), new MediaPlayerService());
}
From the earlier analysis we know that defaultServiceManager() actually returns a BpServiceManager, which implements the IServiceManager interface. Its addService looks like this:
virtual status_t addService(const String16& name, const sp<IBinder>& service)
{
Parcel data, reply;
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
//write in the new service's name, "media.player"
data.writeString16(name);
//write the MediaPlayerService binder object into the request
data.writeStrongBinder(service);
//remote() returns mRemote, i.e. the BpBinder object
status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
return err == NO_ERROR ? reply.readInt32() : err;
}
This code is easy to follow: addService packs the request into the data Parcel and hands it to BpBinder's transact function, so the communication work is delegated to BpBinder.
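As a usage-level illustration of this registration pattern, the sketch below shows how an arbitrary native process could register a service in the same way MediaPlayerService does. The DummyService class and the "foo.dummy" name are made up for the example; only the flow (ProcessState, addService, thread pool) comes from the article.
#include <binder/Binder.h>
#include <binder/IServiceManager.h>
#include <binder/ProcessState.h>
#include <binder/IPCThreadState.h>
#include <utils/String16.h>
using namespace android;

// Minimal hypothetical service: a bare BBinder with no business methods,
// just enough to show the registration flow.
class DummyService : public BBinder {};

int main()
{
    sp<ProcessState> proc(ProcessState::self());      // open /dev/binder and mmap, as analyzed above
    defaultServiceManager()->addService(String16("foo.dummy"), new DummyService());
    ProcessState::self()->startThreadPool();          // spawn a binder pool thread
    IPCThreadState::self()->joinThreadPool();         // let the main thread serve transactions too
    return 0;
}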
Next, let's analyze BpBinder's transact function; the code is in frameworks/base/libs/binder/BpBinder.cpp:
status_t BpBinder::transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
//and here it loops back: call IPCThreadState's transact
//note: mHandle is 0 here, code is ADD_SERVICE_TRANSACTION, data is the command parcel,
//reply is the reply parcel, and flags is 0
status_t status = IPCThreadState::self()->transact(
mHandle, code, data, reply, flags);
if (status == DEAD_OBJECT) mAlive = 0;
return status;
}
So IPCThreadState shows up again; its source is in frameworks/native/libs/binder/IPCThreadState.cpp. Let's look at its self() function first:
IPCThreadState* IPCThreadState::self()
{
if (gHaveTLS) {//false the first time through
restart:
const pthread_key_t k = gTLS;
//TLS stands for Thread Local Storage. Each thread has its own copy of this storage and
//threads do not share it, so no synchronization is needed: whatever this thread stores
//in its TLS slot cannot be seen by other threads.
//Here the IPCThreadState object saved in thread-local storage is fetched. Since
//pthread_getspecific is used, there must be a matching pthread_setspecific somewhere.
IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
if (st) return st;
return new IPCThreadState;//create a new instance for this thread
}
if (gShutdown) return NULL;
pthread_mutex_lock(&gTLSMutex);
if (!gHaveTLS) {
if (pthread_key_create(&gTLS, threadDestructor) != 0) {
pthread_mutex_unlock(&gTLSMutex);
return NULL;
}
gHaveTLS = true;
}
pthread_mutex_unlock(&gTLSMutex);
goto restart;
}
Now look at its constructor:
IPCThreadState::IPCThreadState()
: mProcess(ProcessState::self()), mMyThreadId(androidGetTid())
{
pthread_setspecific(gTLS, this);
clearCaller();
//mIn and mOut are two Parcels; think of them as the receive and send buffers for commands
mIn.setDataCapacity(256);
mOut.setDataCapacity(256);
}
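The self()/constructor pair above is a classic pthread per-thread singleton: a process-wide pthread_key_t is created once, and each thread lazily allocates its own instance and stashes it with pthread_setspecific. Here is a minimal standalone sketch of that pattern, independent of Binder (the ThreadLocalState name is made up):
#include <pthread.h>

class ThreadLocalState {
public:
    // Returns this thread's private instance, creating it on first use.
    static ThreadLocalState* self() {
        pthread_once(&sKeyOnce, createKey);                   // create the TLS key once per process
        void* st = pthread_getspecific(sKey);
        if (st) return static_cast<ThreadLocalState*>(st);
        return new ThreadLocalState();                        // the ctor registers itself in TLS
    }
private:
    ThreadLocalState() { pthread_setspecific(sKey, this); }   // like IPCThreadState's constructor
    static void createKey() { pthread_key_create(&sKey, destroy); }
    static void destroy(void* p) { delete static_cast<ThreadLocalState*>(p); }

    static pthread_key_t sKey;
    static pthread_once_t sKeyOnce;
};

pthread_key_t ThreadLocalState::sKey;
pthread_once_t ThreadLocalState::sKeyOnce = PTHREAD_ONCE_INIT;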
We just saw BpBinder::transact call IPCThreadState::transact; here is the implementation:
status_t IPCThreadState::transact(int32_t handle,
uint32_t code, const Parcel& data,
Parcel* reply, uint32_t flags)
{
status_t err = data.errorCheck();
flags |= TF_ACCEPT_FDS;
if (err == NO_ERROR) {
//call writeTransactionData to queue the data to be sent
//BC_TRANSACTION is the command code applications use to send messages to the binder
//device; commands the driver sends back to applications start with BR_ instead
err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
}
if ((flags & TF_ONE_WAY) == 0) {
if (reply) {
//wait for the Binder driver's reply
err = waitForResponse(reply);
} else {
Parcel fakeReply;
err = waitForResponse(&fakeReply);
}
} else {
//a one-way call: only wait for the driver's acknowledgement
err = waitForResponse(NULL, NULL);
}
return err;
}
The flow should be quite clear now: send the data, then wait for the result. Let's look at the sending path, the writeTransactionData function:
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
binder_transaction_data tr;
tr.target.handle = handle;
tr.code = code;
tr.flags = binderFlags;
const status_t err = data.errorCheck();
if (err == NO_ERROR) {
tr.data_size = data.ipcDataSize();
tr.data.ptr.buffer = data.ipcData();
tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
tr.data.ptr.offsets = data.ipcObjects();
}
....
//the command data above is wrapped into a binder_transaction_data and written into mOut; mOut is the outgoing command buffer and is itself a Parcel
mOut.writeInt32(cmd);
mOut.write(&tr, sizeof(tr));
return NO_ERROR;
}
Note that the function above only writes the request into the Parcel; nothing has been sent yet. The actual send, and the wait for the reply, happen in waitForResponse:
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
int32_t cmd;
int32_t err;
while (1) {
//talkWithDriver -- this must be where the real I/O happens
if ((err=talkWithDriver()) < NO_ERROR) break;
err = mIn.errorCheck();
if (err < NO_ERROR) break;
if (mIn.dataAvail() == 0) continue;
//notice that mIn is being read here: talkWithDriver sent mOut out and filled mIn
//with the data read back from the driver
cmd = mIn.readInt32();
switch (cmd) {
case BR_TRANSACTION_COMPLETE:
if (!reply && !acquireResult) goto finish;
break;
......
default:
err = executeCommand(cmd);
if (err != NO_ERROR) goto finish;
break;
}
}
finish:
......
return err;
}
Now look at the implementation of talkWithDriver:
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
//binder_write_read is the structure used to exchange data with the Binder device
binder_write_read bwr;
status_t err;
......
do {
//read and write through a single ioctl
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
else
err = -errno;
} while (err == -EINTR);
//at this point the reply data is in bwr; the receive buffer bwr points at is the one provided by mIn
if (bwr.read_consumed > 0) {
mIn.setDataSize(bwr.read_consumed);
mIn.setDataPosition(0);
}
return NO_ERROR;
}
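talkWithDriver shows that a single BINDER_WRITE_READ ioctl both writes the queued commands and reads incoming ones. Below is a minimal, hypothetical sketch of filling in a binder_write_read for one round trip; the field names come from the kernel's binder UAPI header, while the function name and buffer handling are invented for the example.
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>   // struct binder_write_read, BINDER_WRITE_READ

// One write/read round trip against an already-opened /dev/binder fd.
// outBuf/outLen hold BC_* commands to send; inBuf receives BR_* commands.
int binder_round_trip(int fd, const void* outBuf, size_t outLen,
                      void* inBuf, size_t inCap, size_t* inLen)
{
    struct binder_write_read bwr;
    memset(&bwr, 0, sizeof(bwr));

    bwr.write_buffer = (uintptr_t)outBuf;   // the commands we queued (IPCThreadState's mOut)
    bwr.write_size   = outLen;
    bwr.read_buffer  = (uintptr_t)inBuf;    // where the driver puts replies (mIn's buffer)
    bwr.read_size    = inCap;

    if (ioctl(fd, BINDER_WRITE_READ, &bwr) < 0)
        return -1;

    *inLen = bwr.read_consumed;             // how many reply bytes were actually written
    return 0;
}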
OK, the request data has been sent. If something comes back from the driver right away, how is it handled? Let's look at the key executeCommand function:
status_t IPCThreadState::executeCommand(int32_t cmd)
{
BBinder* obj;
RefBase::weakref_type* refs;
status_t result = NO_ERROR;
switch ((uint32_t)cmd) {
......
case BR_TRANSACTION:
{
binder_transaction_data tr;
result = mIn.read(&tr, sizeof(tr));
ALOG_ASSERT(result == NO_ERROR,
"Not enough command data for brTRANSACTION");
if (result != NO_ERROR) break;
Parcel buffer;
buffer.ipcSetDataReference(
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);
const pid_t origPid = mCallingPid;
const uid_t origUid = mCallingUid;
const int32_t origStrictModePolicy = mStrictModePolicy;
const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;
mCallingPid = tr.sender_pid;
mCallingUid = tr.sender_euid;
mLastTransactionBinderFlags = tr.flags;
int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
......
//ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);
Parcel reply;
status_t error;
......
if (tr.target.ptr) {
// We only have a weak reference on the target object, so we must first try to
// safely acquire a strong reference before doing anything else with it.
if (reinterpret_cast<RefBase::weakref_type*>(
tr.target.ptr)->attemptIncStrong(this)) {
//the key BBinder: server-side Bn classes such as BnMediaPlayerService derive from it. Call BBinder's transact function.
error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
&reply, tr.flags);
reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
} else {
error = UNKNOWN_TRANSACTION;
}
} else {
error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
}
......
}
break;
//a death notification from the binder driver saying a remote binder has died; only the Bp (proxy) side receives this
case BR_DEAD_BINDER:
{
BpBinder *proxy = (BpBinder*)mIn.readPointer();
ALOGD("[DN #5] BR_DEAD_BINDER cookie %p", proxy);
proxy->sendObituary();
mOut.writeInt32(BC_DEAD_BINDER_DONE);
mOut.writePointer((uintptr_t)proxy);
} break;
case BR_CLEAR_DEATH_NOTIFICATION_DONE:
{
BpBinder *proxy = (BpBinder*)mIn.readPointer();
proxy->getWeakRefs()->decWeak(proxy);
} break;
......
default:
ALOGD("*** BAD COMMAND %d received from Binder driver\n", cmd);
printf("*** BAD COMMAND %d received from Binder driver\n", cmd);
result = UNKNOWN_ERROR;
break;
}
if (result != NO_ERROR) {
ALOGD("EXECMD cmd %d return %d\n", cmd, (int32_t)result);
mLastError = result;
}
return result;
}
Now look at BBinder's transact function:
status_t BBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
......
//in the default case this simply calls its own onTransact()
err = onTransact(code, data, reply, flags);
return err;
}
Since MediaPlayerService inherits from BnMediaPlayerService, and BnMediaPlayerService in turn derives from BBinder, the call finally lands in BnMediaPlayerService::onTransact():
status_t BnMediaPlayerService::onTransact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
// BnMediaPlayerService derives from BBinder and IMediaPlayerService. Notice the switch below: every function IMediaPlayerService provides is distinguished by a command code
switch(code) {
case CREATE_URL: {
CHECK_INTERFACE(IMediaPlayerService, data, reply);
......  //reading pid, client, url and headers from data is elided here
//create() is a virtual function, implemented by MediaPlayerService itself
sp<IMediaPlayer> player = create(
pid, client, url, numHeaders > 0 ? &headers : NULL);
reply->writeStrongBinder(player->asBinder());
return NO_ERROR;
} break;
......
}
}
### At this point the registration of MediaPlayerService has been fully analyzed; the flow is fairly clear ###
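To see the whole Bn-side pattern in one place, here is a hedged sketch of the server side of the hypothetical IFoo interface from the earlier sketch, repeated so the example stays self-contained. The class names and the DO_SOMETHING code are invented; the structure (CHECK_INTERFACE, read arguments, call the virtual business function, write the reply) mirrors BnMediaPlayerService::onTransact above.
#include <binder/IInterface.h>
#include <binder/Binder.h>
#include <binder/Parcel.h>
using namespace android;

// Hypothetical interface matching the earlier BpFoo sketch.
class IFoo : public IInterface
{
public:
    DECLARE_META_INTERFACE(Foo);
    enum { DO_SOMETHING = IBinder::FIRST_CALL_TRANSACTION };
    virtual status_t doSomething(int32_t value) = 0;
};

// A concrete service would subclass BnFoo and implement doSomething().
class BnFoo : public BnInterface<IFoo>
{
public:
    virtual status_t onTransact(uint32_t code, const Parcel& data,
                                Parcel* reply, uint32_t flags = 0)
    {
        switch (code) {
            case DO_SOMETHING: {
                CHECK_INTERFACE(IFoo, data, reply);   // verify the interface token written by the Bp side
                int32_t value = data.readInt32();     // unpack the argument
                status_t err = doSomething(value);    // call the business implementation
                reply->writeInt32(err);               // pack the result for the Bp side
                return NO_ERROR;
            } break;
            default:
                return BBinder::onTransact(code, data, reply, flags);
        }
    }
};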
## 4 startThreadPool ##
This part is simple, so let's go straight to the code:
void ProcessState::startThreadPool()
{
...
spawnPooledThread(true);
}
void ProcessState::spawnPooledThread(bool isMain)
{
sp<Thread> t = new PoolThread(isMain); //isMain is true here
//create the pool thread and run it -- much like a Java Thread
t->run(buf); //buf holds the thread's name (its construction is elided here)
}
class PoolThread : public Thread
{
......
virtual bool threadLoop()
{
......
//the newly spawned thread also enters the binder thread pool
IPCThreadState::self()->joinThreadPool(mIsMain);
return false;
}
......
};
## 5 joinThreadPool ##
Let's see what joinThreadPool actually does:
void IPCThreadState::joinThreadPool(bool isMain)
{
mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
status_t result;
do {
int32_t cmd;
//send the queued commands and read the next request from the driver
result = talkWithDriver();
......
cmd = mIn.readInt32();
//handle the command that came back
result = executeCommand(cmd);
......
} while (result != -ECONNREFUSED && result != -EBADF);
mOut.writeInt32(BC_EXIT_LOOPER);
talkWithDriver(false);
}
### A quick summary of sections 4 and 5: ###
1. Two threads end up calling talkWithDriver.
2. The thread newly started by startThreadPool calls joinThreadPool and reads the binder device, waiting for requests.
3. The main thread also calls joinThreadPool and reads the binder device, waiting for requests.
4. So the binder device supports multi-threaded operation; as mentioned earlier, the default maximum number of threads is 15.
## ServiceManager, the Manager of Services ##
As mentioned earlier, defaultServiceManager returns a BpServiceManager through which command requests can be sent to the binder device, using handle 0. So something on the other end of the system must be receiving those commands -- but what? Searching the source, there is no BnServiceManager class; yet there is a program that does exactly the work BnServiceManager would do: ServiceManager, whose code lives in frameworks/base/cmds/servicemanager/service_manager.c.
### 1 ServiceManager's entry function ###
int main(int argc, char **argv)
{
struct binder_state *bs;
void *svcmgr = BINDER_SERVICE_MANAGER;
bs = binder_open(128*1024);//1. open the binder device and map 128KB of memory
binder_become_context_manager(bs); //2. become the context manager
svcmgr_handle = svcmgr;
binder_loop(bs, svcmgr_handler);//3. handle the commands sent over by BpServiceManager
}
### 1.1 Opening the binder device ###
//this should look familiar: the same as what ProcessState did -- open the binder device and mmap some memory
struct binder_state *binder_open(unsigned mapsize)
{
struct binder_state *bs;
bs = malloc(sizeof(*bs));
....
bs->fd = open("/dev/binder", O_RDWR);//as expected
....
bs->mapsize = mapsize;
bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
....
return bs;
}
### 1.2 Becoming the context manager ###
int binder_become_context_manager(struct binder_state *bs)
{
return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);//declare ourselves the context manager
}
### 1.3 Handling requests from BpServiceManager ###
void binder_loop(struct binder_state *bs, binder_handler func)
{
int res;
struct binder_write_read bwr;
unsigned readbuf[32];
......
readbuf[0] = BC_ENTER_LOOPER;
binder_write(bs, readbuf, sizeof(unsigned));
for (;;) {//the expected endless loop
bwr.read_size = sizeof(readbuf);
bwr.read_consumed = 0;
bwr.read_buffer = (unsigned) readbuf;
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
//a request has arrived: parse the commands in the read buffer
res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
}
}
Note that func here is the svcmgr_handler passed into binder_loop from main(). It acts like a handleMessage that dispatches the various commands, and it is defined in service_manager.c:
int svcmgr_handler(struct binder_state *bs, struct binder_txn *txn, struct binder_io *msg, struct binder_io *reply)
{
struct svcinfo *si;
uint16_t *s;
unsigned len;
void *ptr;
s = bio_get_string16(msg, &len);
switch(txn->code) {
//handles the addService request
case SVC_MGR_ADD_SERVICE:
s = bio_get_string16(msg, &len);
ptr = bio_get_ref(msg);
if (do_add_service(bs, s, len, ptr, txn->sender_euid))
return -1;
break;
...
}
......
return 0;
}
Here, do_add_service is what actually records the MediaPlayerService information:
int do_add_service(struct binder_state *bs, uint16_t *s, unsigned len, void *ptr, unsigned uid)
{
struct svcinfo *si;
si = find_svc(s, len); //look the service name up in the list of already registered services
......
si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
si->ptr = ptr;
si->len = len;
memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
si->name[len] = '\0';
si->death.func = svcinfo_death;
si->death.ptr = si;
si->next = svclist;
svclist = si; //svclist is the list that holds the information of every service currently registered with ServiceManager
binder_acquire(bs, ptr);
//when the service process exits, ServiceManager does some cleanup work, e.g. freeing the svcinfo malloc'ed above
binder_link_to_death(bs, ptr, &si->death);
return 0;
}
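do_add_service keeps every registered service in the singly linked svclist shown above. As a hedged sketch based only on the svcinfo fields used in do_add_service (not a verbatim copy of servicemanager's find_svc), the lookup that registration and queries rely on walks that list and compares the UTF-16 names:
#include <stdint.h>
#include <string.h>

// Mirror of the fields do_add_service touches; the real struct svcinfo has more members.
struct svcinfo_sketch {
    svcinfo_sketch *next;
    void *ptr;              // the binder reference for the service
    unsigned len;           // name length, in 16-bit code units
    const uint16_t *name;   // UTF-16 service name (stored inline in the real struct svcinfo)
};

static svcinfo_sketch *svclist_sketch;  // head of the registered-service list

// Walk the list and return the entry whose name matches, or NULL if it is not registered yet.
svcinfo_sketch *find_svc_sketch(const uint16_t *s, unsigned len)
{
    for (svcinfo_sketch *si = svclist_sketch; si; si = si->next) {
        if (si->len == len && !memcmp(si->name, s, len * sizeof(uint16_t)))
            return si;
    }
    return 0;
}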
### Why ServiceManager exists ###
In Android, every service is first added to ServiceManager, which manages them centrally; this makes it possible to query which services the system currently offers. Moreover, when a client wants to talk to a service such as MediaPlayerService, it must first ask ServiceManager for that service's information, and then use what ServiceManager returns to interact with MediaPlayerService directly, which to some extent relieves the pressure on the server side.
## MediaPlayerService and Its Client ##
So far we have looked at ServiceManager and its client MediaPlayerService; now let's analyze MediaPlayerService together with its own clients. A client that wants the information of some service has to interact with ServiceManager, calling getService to obtain it. Here is an example from IMediaDeathNotifier.cpp, getMediaPlayerService():
/*static*/ const sp<IMediaPlayerService>&
IMediaDeathNotifier::getMediaPlayerService()
{
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder;
do {
//ask ServiceManager for the service's information; this returns a BpBinder
binder = sm->getService(String16("media.player"));
if (binder != 0) {
break;
}
//if the service has not been registered with ServiceManager yet, keep waiting until it is
usleep(500000); // 0.5s
} while(true);
//interface_cast turns this binder into a BpMediaPlayerService
//note: the binder itself is only used for communicating with the binder device
sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
return sMediaPlayerService;
}
With this BpMediaPlayerService in hand, the client can call the business functions that IMediaPlayerService provides. Keep in mind that every such call packs its request data and sends it to the Binder driver, and the handle stored in the BpBinder is what locates the handler on the other end; the communication layer receives the request and hands it up to the business layer for processing.
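As a final, hedged illustration of the client side, this is how a native client typically obtains and uses a remote service, following getMediaPlayerService() above; the function name connectToMediaPlayerService and the include paths are assumptions for this sketch.
#include <binder/IBinder.h>
#include <binder/IServiceManager.h>
#include <media/IMediaPlayerService.h>
#include <utils/String16.h>
using namespace android;

sp<IMediaPlayerService> connectToMediaPlayerService()
{
    sp<IServiceManager> sm = defaultServiceManager();               // BpServiceManager over handle 0
    sp<IBinder> binder = sm->getService(String16("media.player"));  // ask ServiceManager for the service
    if (binder == NULL) return NULL;                                // service not registered (yet)

    // interface_cast wraps the returned BpBinder in a BpMediaPlayerService,
    // whose business functions pack Parcels and send them through transact().
    return interface_cast<IMediaPlayerService>(binder);
}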