Preface: Most developers have at least heard of Binder, but how many have actually dug into its lower layers? Many would like to, but since Binder's underpinnings are written in C/C++, it can be hard going for a pure Java developer. For this article, if you have a C/C++ background you can follow my train of thought straight through; if you don't, I'd still encourage you to follow along, and whenever you hit native-layer code you don't understand, look it up as you go — it's a nice way to pick up some C/C++ on the side. The article is fairly long, so I'll close each module with a short summary. Now, let's begin.
What does the Binder architecture look like?
If you're reading this, you probably already know a bit about the Binder architecture: it is a client/server (C/S) design. C/S should sound familiar; an HTTP request is probably the first thing that comes to mind, and that familiarity helps here. The architecture involves four important concepts: the Client, the Server, the ServiceManager, and the Binder driver. The ServiceManager plays the role of DNS (a name-lookup service), the Client is our client side, and the Server is our server side. In a sense the ServiceManager is itself a server too, but we'll treat it separately here to avoid confusion.
How do the Client and the Server communicate?
With those concepts in place, let's look at how the two ends actually communicate. When communication is first being established, the client cannot talk to the server directly. Why not? Because at that point the client knows nothing about the server except its name, so it needs somewhere it can look the server up by that name. That is exactly what the ServiceManager is for: as the system boots, every service that clients will need is registered with the ServiceManager. Communication therefore takes three steps:
1> At system startup, services are registered with the ServiceManager.
2> When the client needs to talk to a server, it queries the ServiceManager for that server's service.
3> Once the service is found, the client communicates with the server directly (see the sketch right after this list).
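Here is a minimal client-side sketch of steps 2 and 3, using the "media.player" service we'll analyze below (my illustration, not AOSP code; error handling omitted):
#include <binder/IServiceManager.h>
#include <media/IMediaPlayerService.h>

using namespace android;

sp<IMediaPlayerService> lookUpMediaPlayer() {
    // Step 2: ask the "DNS" (ServiceManager) for the server, by name.
    sp<IServiceManager> sm = defaultServiceManager();
    sp<IBinder> binder = sm->getService(String16("media.player"));
    // Step 3: wrap the handle in a typed proxy; from here on the client
    // talks to the server directly.
    return interface_cast<IMediaPlayerService>(binder);
}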
A concrete case: MediaPlayerService
Enough concepts — let's dissect a concrete case, and MediaPlayerService is the one we'll use.
We start from MediaServer's entry function, because that is where the Media-related services are registered with the ServiceManager.
int main(int argc __unused, char** argv)
{
// all other services
if (doLog) {
prctl(PR_SET_PDEATHSIG, SIGKILL); // if parent media.log dies before me, kill me also
setpgid(0, 0); // but if I die first, don't kill my parent
}
InitializeIcuOrDie();
sp<ProcessState> proc(ProcessState::self());
sp<IServiceManager> sm = defaultServiceManager();
ALOGI("ServiceManager: %p", sm.get());
AudioFlinger::instantiate();
MediaPlayerService::instantiate();
ResourceManagerService::instantiate();
CameraService::instantiate();
AudioPolicyService::instantiate();
SoundTriggerHwService::instantiate();
RadioService::instantiate();
registerExtensions();
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
}
main first executes ProcessState::self() to instantiate ProcessState, a per-process singleton whose constructor does a lot of work — most importantly, opening the "/dev/binder" node. (The snippet above is lightly abridged; doLog is set up earlier in the real file.) A hedged sketch of what that open boils down to follows (my illustration, not the verbatim AOSP code; header paths and defaults vary by version):
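#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>   // <linux/binder.h> on older kernels

// Roughly what ProcessState's open_driver() step does: one fd per process,
// kept open for the life of the process.
int openBinderDriver() {
    int fd = open("/dev/binder", O_RDWR);
    if (fd >= 0) {
        // Tell the driver how many extra Binder threads it may ask us to spawn.
        int maxThreads = 15;   // DEFAULT_MAX_BINDER_THREADS in ProcessState.cpp
        ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
    }
    return fd;
}
With the driver open, main next calls the defaultServiceManager method; let's follow it in: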
sp<IServiceManager> defaultServiceManager()
{
if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
{
AutoMutex _l(gDefaultServiceManagerLock);
while (gDefaultServiceManager == NULL) {
gDefaultServiceManager = interface_cast<IServiceManager>(
ProcessState::self()->getContextObject(NULL));
if (gDefaultServiceManager == NULL)
sleep(1);
}
}
return gDefaultServiceManager;
}
This method first checks whether gDefaultServiceManager already exists; if so it is returned directly, otherwise it is created via interface_cast. Leave interface_cast aside for a moment and look at the inner call: ProcessState::self()->getContextObject(NULL) is equivalent to ProcessState::self()->getContextObject(0), and that 0 is the handle that subsequent inter-process communication uses to find the ServiceManager. From the earlier analysis we know ProcessState::self() returns the ProcessState instance, so let's look at ProcessState's getContextObject method:
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
return getStrongProxyForHandle(0);
}
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;
AutoMutex _l(mLock);
handle_entry* e = lookupHandleLocked(handle);
if (e != NULL) {
// We need to create a new BpBinder if there isn't currently one, OR we
// are unable to acquire a weak reference on this current one. See comment
// in getWeakProxyForHandle() for more info about this.
IBinder* b = e->binder;
if (b == NULL || !e->refs->attemptIncWeak(this)) {
if (handle == 0) {
// Special case for context manager...
// The context manager is the only object for which we create
// a BpBinder proxy without already holding a reference.
// Perform a dummy transaction to ensure the context manager
// is registered before we create the first local reference
// to it (which will occur when creating the BpBinder).
// If a local reference is created for the BpBinder when the
// context manager is not present, the driver will fail to
// provide a reference to the context manager, but the
// driver API does not return status.
//
// Note that this is not race-free if the context manager
// dies while this code runs.
//
// TODO: add a driver API to wait for context manager, or
// stop special casing handle 0 for context manager and add
// a driver API to get a handle to the context manager with
// proper reference counting.
Parcel data;
status_t status = IPCThreadState::self()->transact(
0, IBinder::PING_TRANSACTION, data, NULL, 0);
if (status == DEAD_OBJECT)
return NULL;
}
b = new BpBinder(handle);
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
} else {
// This little bit of nastyness is to allow us to add a primary
// reference to the remote proxy when this team doesn't have one
// but another team is sending the handle to us.
result.force_set(b);
e->refs->decWeak(this);
}
}
return result;
}
From the execution conditions and the comments, this ultimately returns a BpBinder holding handle 0. Note also that constructing the BpBinder touches IPCThreadState (its constructor calls IPCThreadState::self()), and IPCThreadState is the piece that really communicates with the ServiceManager — we'll keep seeing it from here on. We can also see the transact call that pings the Binder device to make sure the context manager is registered before going any further. We'll assume the ping succeeds here; if it returned DEAD_OBJECT, there would be no further communication with the server anyway.
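As an aside, any proxy can make this same round trip: BpBinder::pingBinder() wraps exactly this PING_TRANSACTION. A usage sketch, assuming binder is some sp<IBinder> proxy you already hold:
status_t alive = binder->pingBinder();   // NO_ERROR means the remote end is up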
Remember the interface_cast we set aside above? Let's look at its implementation now:
gDefaultServiceManager = interface_cast<IServiceManager>(
ProcessState::self()->getContextObject(NULL));
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
return INTERFACE::asInterface(obj);
}
I've pasted both the call site and the implementation here. For readers less familiar with C++, this is a template inline function — let's unpack that. A template is loosely analogous to a generic method in Java (not a perfect analogy, but good enough for intuition). And inline? The official explanation: an inline function can be called like an ordinary function, but the call is implemented by inserting the function body directly at the call site rather than going through the usual function-call machinery, which avoids call overhead and improves runtime efficiency. So with INTERFACE substituted, the call above becomes the following code:
inline sp<IServiceManager> interface_cast(const sp<IBinder>& obj)
{
return IServiceManager::asInterface(obj);
}
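If templates are new to you, here is a toy analogue of my own (not AOSP code) showing how the compiler stamps out one instantiation per type and may inline it at each call site:
// The compiler generates one copy of maxOf for each T actually used.
template <typename T>
inline T maxOf(const T& a, const T& b) { return a > b ? a : b; }

int    i = maxOf(1, 2);      // instantiates maxOf<int>
double d = maxOf(1.5, 2.5);  // instantiates maxOf<double>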
Now let's step into the IServiceManager.h file:
class IServiceManager : public IInterface
{
public:
DECLARE_META_INTERFACE(ServiceManager);
/**
* Retrieve an existing service, blocking for a few seconds
* if it doesn't yet exist.
*/
virtual sp<IBinder> getService( const String16& name) const = 0;
/**
* Retrieve an existing service, non-blocking.
*/
virtual sp<IBinder> checkService( const String16& name) const = 0;
/**
* Register a service.
*/
virtual status_t addService( const String16& name,
const sp<IBinder>& service,
bool allowIsolated = false) = 0;
/**
* Return list of all existing services.
*/
virtual Vector<String16> listServices() = 0;
enum {
GET_SERVICE_TRANSACTION = IBinder::FIRST_CALL_TRANSACTION,
CHECK_SERVICE_TRANSACTION,
ADD_SERVICE_TRANSACTION,
LIST_SERVICES_TRANSACTION,
};
};
sp<IServiceManager> defaultServiceManager();
template<typename INTERFACE>
status_t getService(const String16& name, sp<INTERFACE>* outService)
{
const sp<IServiceManager> sm = defaultServiceManager();
if (sm != NULL) {
*outService = interface_cast<INTERFACE>(sm->getService(name));
if ((*outService) != NULL) return NO_ERROR;
}
return NAME_NOT_FOUND;
}
bool checkCallingPermission(const String16& permission);
bool checkCallingPermission(const String16& permission,
int32_t* outPid, int32_t* outUid);
bool checkPermission(const String16& permission, pid_t pid, uid_t uid);
class BnServiceManager : public BnInterface<IServiceManager>
{
public:
virtual status_t onTransact( uint32_t code,
const Parcel& data,
Parcel* reply,
uint32_t flags = 0);
};
}; // namespace android
This file reads very much like a Java AIDL file, and in fact its role is roughly the same (that's a fair way to think of it for now). Most of this code should be readable as-is, but one line deserves special attention:
DECLARE_META_INTERFACE(ServiceManager);
At first glance it's hard to tell what this does; all we can guess is that it declares something (there's a DECLARE in there). So let's go in and see:
#define DECLARE_META_INTERFACE(INTERFACE) \
static const android::String16 descriptor; \
static android::sp<I##INTERFACE> asInterface( \
const android::sp<android::IBinder>& obj); \
virtual const android::String16& getInterfaceDescriptor() const; \
I##INTERFACE(); \
virtual ~I##INTERFACE();
#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME) \
const android::String16 I##INTERFACE::descriptor(NAME); \
const android::String16& \
I##INTERFACE::getInterfaceDescriptor() const { \
return I##INTERFACE::descriptor; \
} \
android::sp<I##INTERFACE> I##INTERFACE::asInterface( \
const android::sp<android::IBinder>& obj) \
{ \
android::sp<I##INTERFACE> intr; \
if (obj != NULL) { \
intr = static_cast<I##INTERFACE*>( \
obj->queryLocalInterface( \
I##INTERFACE::descriptor).get()); \
if (intr == NULL) { \
intr = new Bp##INTERFACE(obj); \
} \
} \
return intr; \
} \
I##INTERFACE::I##INTERFACE() { } \
I##INTERFACE::~I##INTERFACE() { }
What we find are two macro definitions, standing in a declaration/implementation relationship — which explains what the DECLARE_META_INTERFACE call above means. Since DECLARE_META_INTERFACE appears in IServiceManager.h, we would naturally expect IMPLEMENT_META_INTERFACE in IServiceManager.cpp, and indeed it is invoked there like this:
IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager")
Now let's expand the two macros into concrete code:
// Expansion of DECLARE_META_INTERFACE(ServiceManager)
static const android::String16 descriptor;
static android::sp<IServiceManager> asInterface(
const android::sp<android::IBinder>& obj);
virtual const android::String16& getInterfaceDescriptor() const;
IServiceManager();
virtual ~IServiceManager();
// Expansion of IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager")
const android::String16 IServiceManager::descriptor("android.os.IServiceManager");
const android::String16&
IServiceManager::getInterfaceDescriptor() const {
return IServiceManager::descriptor;
}
android::sp<IServiceManager> IServiceManager::asInterface(
const android::sp<android::IBinder>& obj)
{
android::sp<IServiceManager> intr;
if (obj != NULL) {
intr = static_cast<IServiceManager*>(
obj->queryLocalInterface(
IServiceManager::descriptor).get());
if (intr == NULL) {
intr = new BpServiceManager(obj);
}
}
return intr;
}
IServiceManager::IServiceManager() { }
IServiceManager::~IServiceManager() { }
The correspondence between the macros and their expansions is marked in the code; you can also try the expansion yourself.
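The key trick in both macros is the ## token-pasting operator, which glues the I prefix onto the interface name. A toy example of my own makes the mechanism obvious:
#define DECLARE_FOO(NAME) class I##NAME { public: void hello(); };

DECLARE_FOO(Demo)   // expands to: class IDemo { public: void hello(); };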
So that whole stretch of analysis really revolved around the interface_cast method: the gDefaultServiceManager variable above is an IServiceManager — more precisely a BpServiceManager — which means the return value of defaultServiceManager() is a BpServiceManager.
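Conceptually (my annotation, not AOSP code), the object you get back is layered like this:
sp<IServiceManager> sm = defaultServiceManager();
// sm is, concretely:  BpServiceManager( BpBinder(handle = 0) )
//   - BpBinder(0)      : the communication layer, holding handle 0
//   - BpServiceManager : the business layer, packing a Parcel per call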
Summary: 1. ProcessState::self() opens the Binder driver node ("/dev/binder").
2. defaultServiceManager() returns a BpServiceManager.
3. The inheritance relationships in the IServiceManager family:
>BpServiceManager inherits from BpInterface, which in turn inherits from BpRefBase and IServiceManager.
>BnServiceManager inherits from BnInterface, which in turn inherits from BBinder and IServiceManager.
Now let's return to MediaServer's main function, which contains this call:
MediaPlayerService::instantiate();
The protagonist of our case study takes the stage. Let's look at the implementation:
void MediaPlayerService::instantiate() {
defaultServiceManager()->addService(
String16("media.player"), new MediaPlayerService());
}
Remember what the defaultServiceManager() analyzed earlier returns? A BpServiceManager (carrying a BpBinder inside), so this call is effectively:
BpServiceManager->addService(
String16("media.player"), new MediaPlayerService());
Now let's look at the addService function in BpServiceManager:
virtual status_t addService(const String16& name, const sp<IBinder>& service,
bool allowIsolated)
{
Parcel data, reply;
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
data.writeString16(name);
data.writeStrongBinder(service);
data.writeInt32(allowIsolated ? 1 : 0);
status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
return err == NO_ERROR ? reply.readExceptionCode() : err;
}
One small detail first: the addService call above passes only two arguments, yet the function here takes three. The reason is this declaration in the IServiceManager.h header:
virtual status_t addService( const String16& name,
const sp<IBinder>& service,
bool allowIsolated = false) = 0;
The last parameter carries a default value, so callers may omit the third argument; if allowIsolated had no default, every call site would have to supply all three arguments. (A toy example of default arguments follows.)
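For readers newer to C++, a toy example of default arguments (mine, not from AOSP):
int scale(int value, int factor = 2) { return value * factor; }

int a = scale(10);      // factor defaults to 2 -> a == 20
int b = scale(10, 3);   // factor given as 3    -> b == 30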
Now back inside addService. The two things to take away are:
1> It is a business-layer function: it packs the data into a Parcel and hands it to the BpBinder.
2> Communication goes through BpBinder's transact function; the communication work is delegated to BpBinder.
So this function's main job is to package the arguments into data and pass them to BpBinder's communication layer. Since that layer lives in BpBinder::transact, let's look inside it:
status_t BpBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
// Once a binder has died, it will never come back to life.
if (mAlive) {
status_t status = IPCThreadState::self()->transact(
mHandle, code, data, reply, flags);
if (status == DEAD_OBJECT) mAlive = 0;
return status;
}
return DEAD_OBJECT;
}
Here we find that BpBinder in turn hands the job over to IPCThreadState. Let's look at IPCThreadState's self function:
IPCThreadState* IPCThreadState::self()
{
if (gHaveTLS) {
restart:
const pthread_key_t k = gTLS;
IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
if (st) return st;
return new IPCThreadState;
}
if (gShutdown) return NULL;
pthread_mutex_lock(&gTLSMutex);
if (!gHaveTLS) {
if (pthread_key_create(&gTLS, threadDestructor) != 0) {
pthread_mutex_unlock(&gTLSMutex);
return NULL;
}
gHaveTLS = true;
}
pthread_mutex_unlock(&gTLSMutex);
goto restart;
}
This simply returns a per-thread IPCThreadState instance (cached in thread-local storage via pthread keys). Let's look at its constructor:
IPCThreadState::IPCThreadState()
: mProcess(ProcessState::self()),
mMyThreadId(gettid()),
mStrictModePolicy(0),
mLastTransactionBinderFlags(0)
{
pthread_setspecific(gTLS, this);
clearCaller();
mIn.setDataCapacity(256);
mOut.setDataCapacity(256);
}
The constructor first calls pthread_setspecific to stash this instance in TLS, then initializes mIn and mOut (with 256-byte initial capacities in AOSP). Both mIn and mOut are Parcels; think of them as the buffers for receiving and sending commands, respectively. A quick sketch of using a Parcel this way follows.
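An illustration of a Parcel used as such a buffer (my sketch, not AOSP code): writes append in order, and the reader rewinds before consuming.
#include <binder/Parcel.h>

using namespace android;

void parcelRoundTrip() {
    Parcel p;
    p.writeInt32(42);                 // data is appended in order...
    p.writeString16(String16("hi"));
    p.setDataPosition(0);             // ...and the reader rewinds before reading
    int32_t v = p.readInt32();        // 42
    String16 s = p.readString16();    // "hi"
}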
Time for another summary:
addService calls remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply); since remote() actually returns the BpBinder, the call proceeds to BpBinder::transact, and inside BpBinder what really runs turns out to be IPCThreadState's transact.
(Main_MediaServer.main() -> MediaPlayerService.instantiate [MediaPlayerService initialization] -> BpServiceManager.addService() -> remote().transact -> BpBinder.transact -> IPCThreadState.transact)
Now let's continue into IPCThreadState's transact function:
status_t IPCThreadState::transact(int32_t handle,
uint32_t code, const Parcel& data,
Parcel* reply, uint32_t flags)
{
status_t err = data.errorCheck();
flags |= TF_ACCEPT_FDS;
IF_LOG_TRANSACTIONS() {
TextOutput::Bundle _b(alog);
alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
<< handle << " / code " << TypeCode(code) << ": "
<< indent << data << dedent << endl;
}
if (err == NO_ERROR) {
LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
(flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
}
if (err != NO_ERROR) {
if (reply) reply->setError(err);
return (mLastError = err);
}
if ((flags & TF_ONE_WAY) == 0) {
#if 0
if (code == 4) { // relayout
ALOGI(">>>>>> CALLING transaction 4");
} else {
ALOGI(">>>>>> CALLING transaction %d", code);
}
#endif
if (reply) {
err = waitForResponse(reply);
} else {
Parcel fakeReply;
err = waitForResponse(&fakeReply);
}
#if 0
if (code == 4) { // relayout
ALOGI("<<<<<< RETURNING transaction 4");
} else {
ALOGI("<<<<<< RETURNING transaction %d", code);
}
#endif
IF_LOG_TRANSACTIONS() {
TextOutput::Bundle _b(alog);
alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
<< handle << ": ";
if (reply) alog << indent << *reply << dedent << endl;
else alog << "(none requested)" << endl;
}
} else {
err = waitForResponse(NULL, NULL);
}
return err;
}
Two of this function's parameters were not supplied at the addService call site: handle and flags. handle identifies the server-side Binder we want to talk to, and flags defaults to 0, meaning a synchronous call (the TF_ONE_WAY checks in the body handle the asynchronous case). Now focus on this spot in the function:
if (err == NO_ERROR) {
LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
(flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
}
The BC_TRANSACTION here is a command code that the application sends to the Binder device; codes the Binder device sends back to the application begin with BR_. They are defined in /bionic/libc/kernel/uapi/linux/binder.h. Now into the writeTransactionData function:
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
binder_transaction_data tr;
tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
tr.target.handle = handle;
tr.code = code;
tr.flags = binderFlags;
tr.cookie = 0;
tr.sender_pid = 0;
tr.sender_euid = 0;
const status_t err = data.errorCheck();
if (err == NO_ERROR) {
tr.data_size = data.ipcDataSize();
tr.data.ptr.buffer = data.ipcData();
tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
tr.data.ptr.offsets = data.ipcObjects();
} else if (statusBuffer) {
tr.flags |= TF_STATUS_CODE;
*statusBuffer = err;
tr.data_size = sizeof(status_t);
tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
tr.offsets_size = 0;
tr.data.ptr.offsets = 0;
} else {
return (mLastError = err);
}
mOut.writeInt32(cmd);
mOut.write(&tr, sizeof(tr));
return NO_ERROR;
}
The function's name and body make its purpose plain: it writes the request message into mOut. A few things to note here (the full structure is shown right after this list):
1> binder_transaction_data tr: binder_transaction_data is the data structure exchanged with the Binder device.
2> tr.target.handle = handle: the handle is stored into target to identify the destination; 0 stands for the ServiceManager.
3> tr.code = code: the transaction code, i.e. the ADD_SERVICE_TRANSACTION we passed in from addService.
4> mOut.writeInt32(cmd); mOut.write(&tr, sizeof(tr)): writes the command into mOut.
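For reference, here is how binder_transaction_data is laid out in the UAPI header (the field comments are mine; minor details can differ between kernel versions):
struct binder_transaction_data {
    union {
        __u32 handle;             // target: handle of a remote node (0 = ServiceManager)
        binder_uintptr_t ptr;     // target: pointer to a local BBinder
    } target;
    binder_uintptr_t cookie;      // opaque cookie for the target
    __u32 code;                   // transaction code, e.g. ADD_SERVICE_TRANSACTION
    __u32 flags;                  // e.g. TF_ONE_WAY, TF_ACCEPT_FDS
    pid_t sender_pid;             // filled in by the driver
    uid_t sender_euid;            // filled in by the driver
    binder_size_t data_size;      // number of bytes of payload data
    binder_size_t offsets_size;   // number of bytes of object offsets
    union {
        struct {
            binder_uintptr_t buffer;    // pointer to the payload
            binder_uintptr_t offsets;   // pointer to the offsets array
        } ptr;
        __u8 buf[8];
    } data;
};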
The addService request is now written into mOut; next comes sending it and receiving the reply. From transact we know the reply is handled by waitForResponse, so let's go take a look:
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
uint32_t cmd;
int32_t err;
while (1) {
if ((err=talkWithDriver()) < NO_ERROR) break;
err = mIn.errorCheck();
if (err < NO_ERROR) break;
if (mIn.dataAvail() == 0) continue;
cmd = (uint32_t)mIn.readInt32();
IF_LOG_COMMANDS() {
alog << "Processing waitForResponse Command: "
<< getReturnString(cmd) << endl;
}
switch (cmd) {
case BR_TRANSACTION_COMPLETE:
if (!reply && !acquireResult) goto finish;
break;
case BR_DEAD_REPLY:
err = DEAD_OBJECT;
goto finish;
case BR_FAILED_REPLY:
err = FAILED_TRANSACTION;
goto finish;
case BR_ACQUIRE_RESULT:
{
ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
const int32_t result = mIn.readInt32();
if (!acquireResult) continue;
*acquireResult = result ? NO_ERROR : INVALID_OPERATION;
}
goto finish;
case BR_REPLY:
{
binder_transaction_data tr;
err = mIn.read(&tr, sizeof(tr));
ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
if (err != NO_ERROR) goto finish;
if (reply) {
if ((tr.flags & TF_STATUS_CODE) == 0) {
reply->ipcSetDataReference(
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t),
freeBuffer, this);
} else {
err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
freeBuffer(NULL,
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), this);
}
} else {
freeBuffer(NULL,
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), this);
continue;
}
}
goto finish;
default:
err = executeCommand(cmd);
if (err != NO_ERROR) goto finish;
break;
}
}
finish:
if (err != NO_ERROR) {
if (acquireResult) *acquireResult = err;
if (reply) reply->setError(err);
mLastError = err;
}
return err;
}
Here we meet if ((err=talkWithDriver()) < NO_ERROR) break; — about as direct as it gets: talk to the driver, i.e. actually issue the request, and get an error code back. The data has now been sent; supposing a reply comes straight back, what happens next? Let's look at the executeCommand(cmd) function:
status_t IPCThreadState::executeCommand(int32_t cmd)
{
BBinder* obj;
RefBase::weakref_type* refs;
status_t result = NO_ERROR;
switch ((uint32_t)cmd) {
case BR_ERROR:
result = mIn.readInt32();
break;
case BR_OK:
break;
case BR_ACQUIRE:
refs = (RefBase::weakref_type*)mIn.readPointer();
obj = (BBinder*)mIn.readPointer();
ALOG_ASSERT(refs->refBase() == obj,
"BR_ACQUIRE: object %p does not match cookie %p (expected %p)",
refs, obj, refs->refBase());
obj->incStrong(mProcess.get());
IF_LOG_REMOTEREFS() {
LOG_REMOTEREFS("BR_ACQUIRE from driver on %p", obj);
obj->printRefs();
}
mOut.writeInt32(BC_ACQUIRE_DONE);
mOut.writePointer((uintptr_t)refs);
mOut.writePointer((uintptr_t)obj);
break;
case BR_RELEASE:
refs = (RefBase::weakref_type*)mIn.readPointer();
obj = (BBinder*)mIn.readPointer();
ALOG_ASSERT(refs->refBase() == obj,
"BR_RELEASE: object %p does not match cookie %p (expected %p)",
refs, obj, refs->refBase());
IF_LOG_REMOTEREFS() {
LOG_REMOTEREFS("BR_RELEASE from driver on %p", obj);
obj->printRefs();
}
mPendingStrongDerefs.push(obj);
break;
case BR_INCREFS:
refs = (RefBase::weakref_type*)mIn.readPointer();
obj = (BBinder*)mIn.readPointer();
refs->incWeak(mProcess.get());
mOut.writeInt32(BC_INCREFS_DONE);
mOut.writePointer((uintptr_t)refs);
mOut.writePointer((uintptr_t)obj);
break;
case BR_DECREFS:
refs = (RefBase::weakref_type*)mIn.readPointer();
obj = (BBinder*)mIn.readPointer();
// NOTE: This assertion is not valid, because the object may no
// longer exist (thus the (BBinder*)cast above resulting in a different
// memory address).
//ALOG_ASSERT(refs->refBase() == obj,
// "BR_DECREFS: object %p does not match cookie %p (expected %p)",
// refs, obj, refs->refBase());
mPendingWeakDerefs.push(refs);
break;
case BR_ATTEMPT_ACQUIRE:
refs = (RefBase::weakref_type*)mIn.readPointer();
obj = (BBinder*)mIn.readPointer();
{
const bool success = refs->attemptIncStrong(mProcess.get());
ALOG_ASSERT(success && refs->refBase() == obj,
"BR_ATTEMPT_ACQUIRE: object %p does not match cookie %p (expected %p)",
refs, obj, refs->refBase());
mOut.writeInt32(BC_ACQUIRE_RESULT);
mOut.writeInt32((int32_t)success);
}
break;
case BR_TRANSACTION:
{
binder_transaction_data tr;
result = mIn.read(&tr, sizeof(tr));
ALOG_ASSERT(result == NO_ERROR,
"Not enough command data for brTRANSACTION");
if (result != NO_ERROR) break;
Parcel buffer;
buffer.ipcSetDataReference(
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);
const pid_t origPid = mCallingPid;
const uid_t origUid = mCallingUid;
const int32_t origStrictModePolicy = mStrictModePolicy;
const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;
mCallingPid = tr.sender_pid;
mCallingUid = tr.sender_euid;
mLastTransactionBinderFlags = tr.flags;
int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
if (gDisableBackgroundScheduling) {
if (curPrio > ANDROID_PRIORITY_NORMAL) {
// We have inherited a reduced priority from the caller, but do not
// want to run in that state in this process. The driver set our
// priority already (though not our scheduling class), so bounce
// it back to the default before invoking the transaction.
setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
}
} else {
if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
// We want to use the inherited priority from the caller.
// Ensure this thread is in the background scheduling class,
// since the driver won't modify scheduling classes for us.
// The scheduling group is reset to default by the caller
// once this method returns after the transaction is complete.
set_sched_policy(mMyThreadId, SP_BACKGROUND);
}
}
//ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);
Parcel reply;
status_t error;
IF_LOG_TRANSACTIONS() {
TextOutput::Bundle _b(alog);
alog << "BR_TRANSACTION thr " << (void*)pthread_self()
<< " / obj " << tr.target.ptr << " / code "
<< TypeCode(tr.code) << ": " << indent << buffer
<< dedent << endl
<< "Data addr = "
<< reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer)
<< ", offsets addr="
<< reinterpret_cast<const size_t*>(tr.data.ptr.offsets) << endl;
}
if (tr.target.ptr) {
sp<BBinder> b((BBinder*)tr.cookie);
error = b->transact(tr.code, buffer, &reply, tr.flags);
} else {
error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
}
//ALOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
// mCallingPid, origPid, origUid);
if ((tr.flags & TF_ONE_WAY) == 0) {
LOG_ONEWAY("Sending reply to %d!", mCallingPid);
if (error < NO_ERROR) reply.setError(error);
sendReply(reply, 0);
} else {
LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
}
mCallingPid = origPid;
mCallingUid = origUid;
mStrictModePolicy = origStrictModePolicy;
mLastTransactionBinderFlags = origTransactionBinderFlags;
IF_LOG_TRANSACTIONS() {
TextOutput::Bundle _b(alog);
alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
<< tr.target.ptr << ": " << indent << reply << dedent << endl;
}
}
break;
case BR_DEAD_BINDER:
{
BpBinder *proxy = (BpBinder*)mIn.readPointer();
proxy->sendObituary();
mOut.writeInt32(BC_DEAD_BINDER_DONE);
mOut.writePointer((uintptr_t)proxy);
} break;
case BR_CLEAR_DEATH_NOTIFICATION_DONE:
{
BpBinder *proxy = (BpBinder*)mIn.readPointer();
proxy->getWeakRefs()->decWeak(proxy);
} break;
case BR_FINISHED:
result = TIMED_OUT;
break;
case BR_NOOP:
break;
case BR_SPAWN_LOOPER:
mProcess->spawnPooledThread(false);
break;
default:
printf("*** BAD COMMAND %d received from Binder driver\n", cmd);
result = UNKNOWN_ERROR;
break;
}
if (result != NO_ERROR) {
mLastError = result;
}
return result;
}
Remember the command we wrote in writeTransactionData: BC_TRANSACTION? Commands an application sends to the Binder device start with BC_, while the device's replies to the application start with BR_. Here we'll only look at BR_TRANSACTION for now; first, this spot inside that branch:
if (tr.target.ptr) {
sp<BBinder> b((BBinder*)tr.cookie);
error = b->transact(tr.code, buffer, &reply, tr.flags);
} else {
error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
}
The b here is in effect the object implementing BnServiceXXX. the_context_object is a global variable defined in IPCThreadState.cpp, settable via the setTheContextObject function.
Now look at this:
case BR_DEAD_BINDER:
{
BpBinder *proxy = (BpBinder*)mIn.readPointer();
proxy->sendObituary();
mOut.writeInt32(BC_DEAD_BINDER_DONE);
mOut.writePointer((uintptr_t)proxy);
} break;
As the name says, this is the driver notifying us that a service has died; apparently only the Bp (proxy) side receives it. The sketch below shows how a client typically taps into this notification.
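On the client side, this obituary is what ultimately fires a registered IBinder::DeathRecipient. A minimal sketch, assuming binder is a proxy you obtained from the ServiceManager (my illustration, not AOSP code):
#define LOG_TAG "BinderDeathDemo"
#include <binder/IBinder.h>
#include <utils/Log.h>

using namespace android;

// The sendObituary() above ends up invoking binderDied() on a Binder
// thread in the client process.
class MyDeathRecipient : public IBinder::DeathRecipient {
public:
    virtual void binderDied(const wp<IBinder>& who) {
        ALOGW("remote service died");
    }
};

void watchService(const sp<IBinder>& binder) {
    sp<IBinder::DeathRecipient> recipient = new MyDeathRecipient();
    binder->linkToDeath(recipient);   // check the returned status_t in real code
}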
And here:
case BR_SPAWN_LOOPER:
mProcess->spawnPooledThread(false);
break;
This handles an instruction from the driver to spawn a new thread for Binder communication.
We now know how the results are processed, but not yet how the client actually talks with the server, so let's step into the talkWithDriver function:
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
if (mProcess->mDriverFD <= 0) {
return -EBADF;
}
binder_write_read bwr;
// Is the read buffer empty?
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
// We don't want to write anything if we are still reading
// from data left in the input buffer and the caller
// has requested to read the next data.
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
bwr.write_size = outAvail;
bwr.write_buffer = (uintptr_t)mOut.data();
// This is what we'll read.
if (doReceive && needRead) {
bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (uintptr_t)mIn.data();
} else {
bwr.read_size = 0;
bwr.read_buffer = 0;
}
IF_LOG_COMMANDS() {
TextOutput::Bundle _b(alog);
if (outAvail != 0) {
alog << "Sending commands to driver: " << indent;
const void* cmds = (const void*)bwr.write_buffer;
const void* end = ((const uint8_t*)cmds)+bwr.write_size;
alog << HexDump(cmds, bwr.write_size) << endl;
while (cmds < end) cmds = printCommand(alog, cmds);
alog << dedent;
}
alog << "Size of receive buffer: " << bwr.read_size
<< ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
}
// Return immediately if there is nothing to do.
if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
bwr.write_consumed = 0;
bwr.read_consumed = 0;
status_t err;
do {
IF_LOG_COMMANDS() {
alog << "About to read/write, write size = " << mOut.dataSize() << endl;
}
#if defined(HAVE_ANDROID_OS)
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
else
err = -errno;
#else
err = INVALID_OPERATION;
#endif
if (mProcess->mDriverFD <= 0) {
err = -EBADF;
}
IF_LOG_COMMANDS() {
alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
}
} while (err == -EINTR);
IF_LOG_COMMANDS() {
alog << "Our err: " << (void*)(intptr_t)err << ", write consumed: "
<< bwr.write_consumed << " (of " << mOut.dataSize()
<< "), read consumed: " << bwr.read_consumed << endl;
}
if (err >= NO_ERROR) {
if (bwr.write_consumed > 0) {
if (bwr.write_consumed < mOut.dataSize())
mOut.remove(0, bwr.write_consumed);
else
mOut.setDataSize(0);
}
if (bwr.read_consumed > 0) {
mIn.setDataSize(bwr.read_consumed);
mIn.setDataPosition(0);
}
IF_LOG_COMMANDS() {
TextOutput::Bundle _b(alog);
alog << "Remaining data size: " << mOut.dataSize() << endl;
alog << "Received commands from driver: " << indent;
const void* cmds = mIn.data();
const void* end = mIn.data() + mIn.dataSize();
alog << HexDump(cmds, mIn.dataSize()) << endl;
while (cmds < end) cmds = printReturnCommand(alog, cmds);
alog << dedent;
}
return NO_ERROR;
}
return err;
}
The doReceive parameter defaults to true. OK, now for the key points inside this function (the structure being filled is shown right after this list):
1> binder_write_read bwr: binder_write_read is the structure used to exchange data with the Binder device.
2> bwr.write_size = outAvail; bwr.write_buffer = (uintptr_t)mOut.data(): fills in the outgoing command.
3> bwr.read_size = mIn.dataCapacity(); bwr.read_buffer = (uintptr_t)mIn.data(): fills in the receive buffer, so any data that comes back lands directly in mIn. (This is how mIn ends up holding the server's data.)
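For reference, binder_write_read is defined in the same UAPI header, essentially as follows (field comments are mine; minor details may vary by kernel version):
struct binder_write_read {
    binder_size_t    write_size;      // bytes available in write_buffer
    binder_size_t    write_consumed;  // bytes consumed by the driver
    binder_uintptr_t write_buffer;    // commands to the driver (mOut.data())
    binder_size_t    read_size;       // bytes of space in read_buffer
    binder_size_t    read_consumed;   // bytes written back by the driver
    binder_uintptr_t read_buffer;     // replies from the driver (mIn.data())
};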
Let's also look at this:
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
So the communication goes through ioctl rather than read/write: a single BINDER_WRITE_READ ioctl both writes out mOut and reads back into mIn.
That covers the business and communication parts. Back in main, remember its last two calls:
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
Let's take them one at a time:
void ProcessState::startThreadPool()
{
AutoMutex _l(mLock);
if (!mThreadPoolStarted) {
mThreadPoolStarted = true;
spawnPooledThread(true);
}
}
If startThreadPool has already run, this function has no further effect. The call worth noting is spawnPooledThread, mentioned earlier: it creates a new thread.
Let's see how it is implemented internally:
void ProcessState::spawnPooledThread(bool isMain)
{
if (mThreadPoolStarted) {
String8 name = makeBinderThreadName();
ALOGV("Spawning new pooled thread, name=%s\n", name.string());
sp<Thread> t = new PoolThread(isMain);
t->run(name.string());
}
}
This creates a PoolThread, a Thread subclass defined inside ProcessState. Here is its implementation:
class PoolThread : public Thread
{
public:
PoolThread(bool isMain)
: mIsMain(isMain)
{
}
protected:
virtual bool threadLoop()
{
IPCThreadState::self()->joinThreadPool(mIsMain);
return false;
}
const bool mIsMain;
};
Now let's look at the joinThreadPool function it calls:
void IPCThreadState::joinThreadPool(bool isMain)
{
LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());
mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
// This thread may have been spawned by a thread that was in the background
// scheduling group, so first we will make sure it is in the foreground
// one to avoid performing an initial transaction in the background.
set_sched_policy(mMyThreadId, SP_FOREGROUND);
status_t result;
do {
processPendingDerefs();
// now get the next command to be processed, waiting if necessary
result = getAndExecuteCommand();
if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
mProcess->mDriverFD, result);
abort();
}
// Let this thread exit the thread pool if it is no longer
// needed and it is not the main process thread.
if(result == TIMED_OUT && !isMain) {
break;
}
} while (result != -ECONNREFUSED && result != -EBADF);
LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%p\n",
(void*)pthread_self(), getpid(), (void*)result);
mOut.writeInt32(BC_EXIT_LOOPER);
talkWithDriver(false);
}
Note this line: mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER); — if isMain is true, this thread enters the loop as a main looper; the request is written into mOut and will be sent out with the next driver exchange. Now let's step into the getAndExecuteCommand() function:
status_t IPCThreadState::getAndExecuteCommand()
{
status_t result;
int32_t cmd;
result = talkWithDriver();
if (result >= NO_ERROR) {
size_t IN = mIn.dataAvail();
if (IN < sizeof(int32_t)) return result;
cmd = mIn.readInt32();
IF_LOG_COMMANDS() {
alog << "Processing top-level Command: "
<< getReturnString(cmd) << endl;
}
pthread_mutex_lock(&mProcess->mThreadCountLock);
mProcess->mExecutingThreadsCount++;
pthread_mutex_unlock(&mProcess->mThreadCountLock);
result = executeCommand(cmd);
pthread_mutex_lock(&mProcess->mThreadCountLock);
mProcess->mExecutingThreadsCount--;
pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
pthread_mutex_unlock(&mProcess->mThreadCountLock);
// After executing the command, ensure that the thread is returned to the
// foreground cgroup before rejoining the pool. The driver takes care of
// restoring the priority, but doesn't do anything with cgroups so we
// need to take care of that here in userspace. Note that we do make
// sure to go in the foreground after executing a transaction, but
// there are other callbacks into user code that could have changed
// our group so we want to make absolutely sure it is put back.
set_sched_policy(mMyThreadId, SP_FOREGROUND);
}
return result;
}
It calls talkWithDriver again, sending any pending commands and reading the next request, which is then dispatched to executeCommand.
That more or less wraps up MediaPlayerService. One more summary, covering the path from IPCThreadState.transact to talkWithDriver: writeTransactionData writes the request data into mOut --> waitForResponse calls talkWithDriver to talk to the Binder driver, and the returned data is placed into mIn --> the result is finally handed back to the caller.
I hope this article is useful to you; corrections are welcome, thanks!
The next article will look at how ServiceManager becomes the system's big housekeeper. Stay tuned.