Android Source Code Reading --- The Binder Client Process

This article takes an in-depth look at how Android's Binder IPC mechanism works from the client process's point of view: how a Binder client interacts with the ServiceManager through getService, and the role the Binder driver plays in between. It covers the client-side call flow, how request data is packaged and transferred, and the read/write operations performed by the Binder driver.


The Binder client process

  1. Once the service manager binary is up and running, the system can register services for the various server processes and let client processes that need a service look it up.
  2. Functionally, service manager is itself a service process: it provides service registration and service lookup. Relative to service manager, which only provides these services, the other service processes and the clients that request services are all client processes, and they all send their requests to service manager.
  3. So how does a client process use the services that the servicemanager process provides?
    From reading the Service Manager source we already know the steps (a minimal sketch of steps 4 and 5 follows this list):
  4. Open the binder driver device
  5. Use mmap to map a chunk of the process's virtual address space onto a buffer managed by the binder driver; the driver copies transaction data directly into the receiver's mapped buffer, so a request written by the client ends up in memory the service manager process can read
  6. Send the request to service manager through the binder driver, i.e. write the data destined for the service manager process into that driver-managed buffer
  7. Read back the result produced by service manager
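
As a rough illustration of steps 4 and 5, here is a minimal sketch of opening and mapping the binder device. It mirrors what ProcessState does internally (the BINDER_VM_SIZE value is taken from the ProcessState constructor shown later); it is not the actual libbinder code, just the raw sequence of system calls, and the UAPI header path is an assumption that may vary between kernel versions:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>
#include <linux/android/binder.h>  /* UAPI header; path is an assumption and may vary */

/* Same value ProcessState uses: 1 MB minus two pages. */
static const size_t kBinderVmSize = (1 * 1024 * 1024) - (4096 * 2);

int main() {
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);   /* step 4: open the binder driver */
    if (fd < 0) { perror("open /dev/binder"); return 1; }

    struct binder_version vers{};
    ioctl(fd, BINDER_VERSION, &vers);                    /* sanity-check the protocol version */

    /* step 5: map a read-only receive buffer; the driver copies incoming
       transaction data into the physical pages backing this mapping. */
    void* vmStart = mmap(nullptr, kBinderVmSize, PROT_READ,
                         MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    if (vmStart == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    printf("binder fd=%d, protocol=%d, buffer=%p\n", fd, vers.protocol_version, vmStart);

    munmap(vmStart, kBinderVmSize);
    close(fd);
    return 0;
}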

1. Starting from getService

When an application process obtains a service through ServiceManager.getService, the process is really just requesting a service from the service manager binary. Let's see what it does, step by step.
First, look at the source of frameworks/base/core/java/android/os/ServiceManager.java:

public final class ServiceManager {
    private static final String TAG = "ServiceManager";
    private static IServiceManager sServiceManager;
    private static HashMap<String, IBinder> sCache = new HashMap<String, IBinder>();/*caches services that have already been looked up, so a repeated request does not have to go back to the service manager binary*/
    private static IServiceManager getIServiceManager() {
        if (sServiceManager != null) {
            return sServiceManager;
        }
        // Find the service manager
        sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());
        return sServiceManager;
    }
    /**
     * Returns a reference to a service with the given name.
     * @param name the name of the service to get
     * @return a reference to the service, or <code>null</code> if the service doesn't exist
     */
    public static IBinder getService(String name) {
/*
First, look the service up in the sCache cache.
If it is not in the cache, request it from the service manager binary.
*/
        try {
            IBinder service = sCache.get(name); 
            if (service != null) {
                return service;
            } else {
                return getIServiceManager().getService(name);
            }
        } catch (RemoteException e) {
            Log.e(TAG, "error in getService", e);
        }
        return null;
    }
    /**
     * Place a new @a service called @a name into the service manager.
     * 
     * @param name the name of the new service
     * @param service the service object
     */
    public static void addService(String name, IBinder service) {
        try {
            getIServiceManager().addService(name, service, false);
        } catch (RemoteException e) {
            Log.e(TAG, "error in addService", e);
        }
    }
    /**
     * Place a new @a service called @a name into the service manager.
     * 
     * @param name the name of the new service
     * @param service the service object
     * @param allowIsolated set to true to allow isolated sandboxed processes
     * to access this service
     */
    public static void addService(String name, IBinder service, boolean allowIsolated) {
        try {
            getIServiceManager().addService(name, service, allowIsolated);
        } catch (RemoteException e) {
            Log.e(TAG, "error in addService", e);
        }
    }
    /**
     * Retrieve an existing service called @a name from the service manager. Non-blocking.
     */
    public static IBinder checkService(String name) {
        try {
            IBinder service = sCache.get(name);
            if (service != null) {
                return service;
            } else {
                return getIServiceManager().checkService(name);
            }
        } catch (RemoteException e) {
            Log.e(TAG, "error in checkService", e);
            return null;
        }
    }

    /**
     * Return a list of all currently running services.
     * @return an array of all currently running services, or <code>null</code> in case of an exception
     */
    public static String[] listServices() {
        try {
            return getIServiceManager().listServices();
        } catch (RemoteException e) {
            Log.e(TAG, "error in listServices", e);
            return null;
        }
    }

    /**
     * This is only intended to be called when the process is first being brought up and bound by the activity manager. There is only one thread in the proces at that time, so no locking is done.
     * 
     * @param cache the cache of service references
     * @hide
     */
    public static void initServiceCache(Map<String, IBinder> cache) {
        if (sCache.size() != 0) {
            throw new IllegalStateException("setServiceCache may only be called once");
        }
        sCache.putAll(cache);
    }
}

The ServiceManager class is just a thin wrapper around a ServiceManagerProxy object; for example, ServiceManager's getService method ultimately calls getService on a ServiceManagerProxy object.
So ServiceManager.getService is effectively new ServiceManagerProxy(BinderInternal.getContextObject()).getService. Now look at ServiceManagerProxy's getService method:

/*
1. Wrap the name of the requested service in a Parcel
2. Send the request to the service manager process via IBinder's transact method
3. Read the returned result out of the reply Parcel
*/
    public IBinder getService(String name) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        data.writeInterfaceToken(IServiceManager.descriptor);
        data.writeString(name);
        mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);/*mRemote is the BinderInternal.getContextObject() object; when transact returns, the result has been written into reply*/
        IBinder binder = reply.readStrongBinder();/*take the returned result out of reply*/
        reply.recycle();
        data.recycle();
        return binder;
    }

To understand how this IBinder object sends a request to the service manager process,

  1. first figure out what kind of object BinderInternal.getContextObject() actually is,
  2. then look at what that object's transact method does.

1.1 The BinderInternal.getContextObject() object

The implementation of BinderInternal.getContextObject() is as follows:

    /**
     * Return the global "context object" of the system. This is usually
     * an implementation of IServiceManager, which you can use to find
     * other services.
     */
    public static final native IBinder getContextObject();

As the code shows, getContextObject is a native method; its implementation lives in frameworks/base/core/jni/android_util_Binder.cpp:

static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);/*creates a BpBinder object*/
    return javaObjectForIBinder(env, b); /*converts the C++ BpBinder into a Java-level BinderProxy object*/
} 

The whole method is just two statements:

  • First, create a C++ IBinder-typed object
  • Then convert that object into a Java-level IBinder-typed object

1.1.1 The C++-level IBinder object

First, look at the IBinder object created by ProcessState::self()->getContextObject(NULL):

sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {/*if the global gProcess is not NULL, return the previously created object; on first entry it is NULL, so execution falls through to the code below*/
        return gProcess;
    }
    gProcess = new ProcessState; /*new a ProcessState object*/
    return gProcess;
}
ProcessState::ProcessState()
    : mDriverFD(open_driver())/*open the binder driver*/
    , mVMStart(MAP_FAILED)
    , mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
    , mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
    , mExecutingThreadsCount(0)
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
    , mStarvationStartTimeMs(0)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {/*if the binder driver was opened successfully*/
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);/*map BINDER_VM_SIZE bytes of this process's address space*/
/*#define BINDER_VM_SIZE ((1*1024*1024) - (4096 *2)), i.e. a 1016 KB region
*/
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
    }
    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened. Terminating.");
}

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)/*the NULL argument passed in is never actually used*/
{
    return getStrongProxyForHandle(0);
}
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)/*the handle passed in is 0, which stands for the service manager*/
{
    sp<IBinder> result;
    AutoMutex _l(mLock);
    handle_entry* e = lookupHandleLocked(handle); /*look up the binder info for the target handle; under normal conditions the return value is never NULL*/
    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we are unable to acquire a weak reference on this current one. See comment in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {/*on first entry, both b and refs are NULL*/
            if (handle == 0) {
                // Special case for context manager...
                // The context manager is the only object for which we create a BpBinder proxy without already holding a reference.
                // Perform a dummy transaction to ensure the context manager is registered before we create the first local reference to it (which will occur when creating the BpBinder).
                // If a local reference is created for the BpBinder when the context manager is not present, the driver will fail to provide a reference to the context manager, but the driver API does not return status.
                //
                // Note that this is not race-free if the context manager dies while this code runs.
                //
                // TODO: add a driver API to wait for context manager, or stop special casing handle 0 for context manager and add a driver API to get a handle to the context manager with proper reference counting.
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                   return NULL;
            }

            b = new BpBinder(handle);  /*create a C++-level BpBinder object*/
            e->binder = b;
            if (b) e->refs = b->getWeakRefs(); /*also keep the BpBinder's weak reference*/
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary reference to the remote proxy when this team doesn't have one but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
    const size_t N=mHandleToObject.size(); /* mHandleToObject is a Vector that records the handle entries this process knows about*/
    if (N <= (size_t)handle) {/*if the requested handle does not exist yet, create an entry for it*/
        handle_entry e;
        e.binder = NULL;
        e.refs = NULL;
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
        if (err < NO_ERROR) return NULL;
    }
    return &mHandleToObject.editItemAt(handle);
}

So at this point we can be sure that ProcessState::self()->getContextObject(NULL) creates a C++-level BpBinder object.

1.1.2 The Java-level IBinder object

Next, let's see what javaObjectForIBinder(env, b) turns the BpBinder object into. Back in frameworks/base/core/jni/android_util_Binder.cpp, the implementation of javaObjectForIBinder is:

static struct bindernative_offsets_t {  jclass mClass;  jmethodID mExecTransact;    jfieldID mObject;} gBinderOffsets;/*static variable*/
static struct binderproxy_offsets_t{ jclass mClass;  jmethodID mConstructor; jmethodID mSendDeathNotice; jfieldID mObject;  jfieldID mSelf; jfieldID mOrgue;} gBinderProxyOffsets; /*static variable*/
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
    if (val == NULL) return NULL;/*null check; at this point val is a BpBinder object*/
    if (val->checkSubclass(&gBinderOffsets)) {/*checkSubclass is a virtual function declared in IBinder.h whose default implementation in Binder.cpp simply returns false; BpBinder inherits it from IBinder*/
        // One of our own!
        jobject object = static_cast<JavaBBinder*>(val.get())->object();
        LOGDEATH("objectForBinder %p: it's our own %p!\n", val.get(), object);
        return object;
    }

    // For the rest of the function we will hold this lock, to serialize looking/creation/destruction of Java proxies for native Binder proxies.
    AutoMutex _l(mProxyLock);

    // Someone else's... do we know about it?
    jobject object = (jobject)val->findObject(&gBinderProxyOffsets);
    if (object != NULL) {
        jobject res = jniGetReferent(env, object);
        if (res != NULL) {
            ALOGV("objectForBinder %p: found existing %p!\n", val.get(), res);
            return res;
        }
        LOGDEATH("Proxy object %p of IBinder %p no longer in working set!!!", object, val.get());
        android_atomic_dec(&gNumProxyRefs);
        val->detachObject(&gBinderProxyOffsets);
        env->DeleteGlobalRef(object);
    }

    object = env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);/*create a Java android/os/BinderProxy object through JNI*/
    if (object != NULL) {
        LOGDEATH("objectForBinder %p: created new proxy %p !\n", val.get(), object);
        // The proxy holds a reference to the native object.
        env->SetLongField(object, gBinderProxyOffsets.mObject, (jlong)val.get());/*and store the BpBinder pointer in the BinderProxy object's mObject field*/
        val->incStrong((void*)javaObjectForIBinder);

        // The native object needs to hold a weak reference back to the proxy, so we can retrieve the same proxy if it is still active.
        jobject refObject = env->NewGlobalRef(env->GetObjectField(object, gBinderProxyOffsets.mSelf));/*get the mSelf field of the Java android/os/BinderProxy object*/
        val->attachObject(&gBinderProxyOffsets, refObject,  jnienv_to_javavm(env), proxy_cleanup);

        // Also remember the death recipients registered on this proxy
        sp<DeathRecipientList> drl = new DeathRecipientList;
        drl->incStrong((void*)javaObjectForIBinder);
        env->SetLongField(object, gBinderProxyOffsets.mOrgue, reinterpret_cast<jlong>(drl.get()));/*assign the mOrgue field of the Java android/os/BinderProxy object*/

        // Note that a new object reference has been created.
        android_atomic_inc(&gNumProxyRefs);
        incRefsCreated(env);
    }
    return object;
}

So javaObjectForIBinder(env, b) wraps the BpBinder object in a Java-level android/os/BinderProxy object.

Going back to where we started: in ServiceManager.java, getIServiceManager returns a ServiceManagerProxy object, and the argument used to construct it is a BinderProxy object. In other words, ServiceManagerNative.asInterface(BinderInternal.getContextObject()) is equivalent to new ServiceManagerProxy(new BinderProxy()).

1.2 The transact method

The ServiceManagerProxy object was constructed with a BinderProxy object, i.e. mRemote = new BinderProxy(), so the transact call inside getService is BinderProxy's transact method.

    public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
        Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");/*check the Parcel object*/
        if (Binder.isTracingEnabled()) { Binder.getTransactionTracker().addTrace(); }
        return transactNative(code, data, reply, flags);
    }
public native boolean transactNative(int code, Parcel data, Parcel reply,int flags) throws RemoteException;/*again a native method that drops into the C++ layer; it maps to android_os_BinderProxy_transact in android_util_Binder.cpp*/

Next, into android_util_Binder.cpp:

static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj, jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
{
    if (dataObj == NULL) {/*the data passed to the binder driver must not be null*/
        jniThrowNullPointerException(env, NULL);
        return JNI_FALSE;
    }

    Parcel* data = parcelForJavaObject(env, dataObj);/*convert the Java-level Parcel into a C++ Parcel*/
    if (data == NULL) {
        return JNI_FALSE;
    }
    Parcel* reply = parcelForJavaObject(env, replyObj);
    if (reply == NULL && replyObj != NULL) {
        return JNI_FALSE;
    }

    IBinder* target = (IBinder*)env->GetLongField(obj, gBinderProxyOffsets.mObject); /*get the android/os/BinderProxy object's mObject field; when the BinderProxy was created, the BpBinder pointer was stored there, so target is a BpBinder object*/
    if (target == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException", "Binder has been finalized!");
        return JNI_FALSE;
    }
    ALOGV("Java code calling transact on %p in Java object %p with code %" PRId32 "\n", target, obj, code);

    bool time_binder_calls;
    int64_t start_millis;
    if (kEnableBinderSample) {
        // Only log the binder call duration for things on the Java-level main thread.
        // But if we don't
        time_binder_calls = should_time_binder_calls();

        if (time_binder_calls) {
            start_millis = uptimeMillis();
        }
    }

    status_t err = target->transact(code, *data, reply, flags);/*call BpBinder::transact*/
    //if (reply) printf("Transact from Java code to %p received: ", target); reply->print();

    if (kEnableBinderSample) {
        if (time_binder_calls) {
            conditionally_log_binder_call(start_millis, target, code);
        }
    }
    if (err == NO_ERROR) {
        return JNI_TRUE;
    } else if (err == UNKNOWN_TRANSACTION) {
        return JNI_FALSE;
    }
    signalExceptionForError(env, obj, err, true /*canThrowRemoteException*/, data->dataSize());
    return JNI_FALSE;
}

Now let's see what BpBinder's transact does:

status_t BpBinder::transact( uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
/*
1. Obtain the calling thread's IPCThreadState object
2. Call that IPCThreadState object's transact function
*/
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(mHandle, code, data, reply, flags); /*call IPCThreadState's transact*/
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}

Then into frameworks/native/libs/binder/IPCThreadState.cpp.

First, IPCThreadState::self() returns a per-thread IPCThreadState object (one instance per thread, kept in thread-local storage):

IPCThreadState::IPCThreadState(): mProcess(ProcessState::self()),mMyThreadId(gettid()), mStrictModePolicy(0),mLastTransactionBinderFlags(0){
    pthread_setspecific(gTLS, this);
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}

IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {/*on first entry gHaveTLS is false*/
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;/*this returns a new IPCThreadState object for the calling thread*/
    }
    
    if (gShutdown) {
        ALOGW("Calling IPCThreadState::self() during shutdown is dangerous, expect a crash.\n");
        return NULL;
    }
    
    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
        int key_create_value = pthread_key_create(&gTLS, threadDestructor);/*this uses TLS (thread-local storage), which guarantees that each thread sees only its own copy of the variable*/
        if (key_create_value != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            ALOGW("IPCThreadState::self() unable to create TLS key, expect a crash: %s\n",
                    strerror(key_create_value));
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}
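
IPCThreadState::self() is the classic pthread TLS-singleton pattern: create a key once, then let each thread lazily allocate its own instance and store it under that key. A minimal, self-contained sketch of the same pattern (ThreadState here is a hypothetical stand-in, not the real IPCThreadState):

#include <pthread.h>
#include <cstdio>

class ThreadState {
public:
    static ThreadState* self() {
        pthread_once(&sKeyOnce, createKey);      /* create the TLS key exactly once */
        void* st = pthread_getspecific(sKey);    /* look up this thread's instance */
        if (st == nullptr) {
            st = new ThreadState();              /* lazily create it on first use */
            pthread_setspecific(sKey, st);       /* and remember it for this thread */
        }
        return static_cast<ThreadState*>(st);
    }
private:
    static void createKey() { pthread_key_create(&sKey, destroy); }
    static void destroy(void* p) { delete static_cast<ThreadState*>(p); }  /* runs at thread exit */
    static pthread_key_t sKey;
    static pthread_once_t sKeyOnce;
};

pthread_key_t ThreadState::sKey;
pthread_once_t ThreadState::sKeyOnce = PTHREAD_ONCE_INIT;

int main() {
    printf("ThreadState for main thread: %p\n", (void*)ThreadState::self());
    return 0;
}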

Next, look at what the transact method actually does:

status_t IPCThreadState::transact(int32_t handle,uint32_t code, const Parcel& data,Parcel* reply, uint32_t flags)
{
/*
1. Validate the data
2. Package the data
3. Fetch the result
*/
    status_t err = data.errorCheck(); /*first check that the data to be handed to the binder driver is valid*/
    flags |= TF_ACCEPT_FDS;/*flags starts out as 0*/
    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }
    
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(), (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);/*package the data so it can be handed to the binder driver*/
    }
    if (err != NO_ERROR) {/* error handling*/
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
    if ((flags & TF_ONE_WAY) == 0) {/*the synchronous case, i.e. a result is expected*/
        if (reply) {
/*waitForResponse is declared in the header as:
status_t waitForResponse(Parcel *reply,status_t *acquireResult=NULL);
i.e. when called with a single argument, the second argument defaults to NULL
*/
            err = waitForResponse(reply);/*fetch the result returned through the binder driver and store it in reply*/
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {/*the one-way (asynchronous) case*/
        err = waitForResponse(NULL, NULL);
    }
    return err;
}
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{/*pack the data destined for the binder driver into a binder_transaction_data struct, because that is the struct service manager parses when it reads the data delivered by the driver*/
    binder_transaction_data tr;
    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
    const status_t err = data.errorCheck();/*data was already checked earlier; it is checked again here */
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));/*write the binder driver command BC_TRANSACTION and the binder_transaction_data struct into the mOut Parcel*/
    return NO_ERROR;
}
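
So after writeTransactionData, mOut holds a 32-bit command word followed immediately by a binder_transaction_data payload. Below is a hedged sketch of that layout for a request aimed at handle 0 (the field values are illustrative only; the real code lets Parcel and IPCThreadState fill them in, and the UAPI header path is an assumption):

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>
#include <linux/android/binder.h>  /* UAPI header; path is an assumption and may vary */

/* Sketch of the write-side buffer that writeTransactionData builds:
   a command word followed by a binder_transaction_data struct. */
std::vector<uint8_t> buildWriteBuffer(const void* parcelData, size_t parcelSize,
                                      const void* objOffsets, size_t offsetsSize) {
    binder_transaction_data tr{};
    tr.target.handle    = 0;                  /* handle 0 == service manager */
    tr.code             = 1;                  /* e.g. GET_SERVICE_TRANSACTION (illustrative) */
    tr.flags            = TF_ACCEPT_FDS;
    tr.data_size        = parcelSize;         /* flat Parcel bytes */
    tr.offsets_size     = offsetsSize;        /* offsets of flat_binder_object entries */
    tr.data.ptr.buffer  = (binder_uintptr_t)(uintptr_t)parcelData;
    tr.data.ptr.offsets = (binder_uintptr_t)(uintptr_t)objOffsets;

    uint32_t cmd = BC_TRANSACTION;            /* the command word comes first */
    std::vector<uint8_t> out(sizeof(cmd) + sizeof(tr));
    memcpy(out.data(), &cmd, sizeof(cmd));
    memcpy(out.data() + sizeof(cmd), &tr, sizeof(tr));
    return out;                               /* this is, in essence, what ends up in mOut */
}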

Next comes the key function, waitForResponse. It hands the Parcel data packaged above to the binder driver; this is where the client process actually starts talking to the driver.

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;//send the data to the binder driver and get the result back; the result is stored in the Parcel member mIn
        err = mIn.errorCheck();/*check the returned result*/
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;/*nothing available to read yet, go around the loop again*/
        cmd = (uint32_t)mIn.readInt32();
/*read the cmd out of the returned data;
the cases handled below are:
BR_TRANSACTION_COMPLETE
BR_DEAD_REPLY
BR_FAILED_REPLY
BR_ACQUIRE_RESULT
BR_REPLY
default
*/
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        
        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;
        
        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;
        
        case BR_REPLY:
            {
                binder_transaction_data tr;/*if the binder driver needs to return a result, the result data is carried in a binder_transaction_data struct */
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;
                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer), tr.data_size, reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets), tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);/*take the result out of the binder_transaction_data struct and hand it to reply*/
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer); /* if an error was returned, extract it*/
                        freeBuffer(NULL, reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer), tr.data_size, reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets), tr.offsets_size/sizeof(binder_size_t), this);/*release the buffer occupied by the binder_transaction_data payload*/
                    }
                } else {
                    freeBuffer(NULL,  reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer), tr.data_size, reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets), tr.offsets_size/sizeof(binder_size_t), this);
                    continue;/*continue to the next iteration*/
                }
            }
            goto finish;/*exit the loop and finish waitForResponse*/
        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }
    return err;
}
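
Before moving on to talkWithDriver, here is a hedged sketch of what parsing the driver's reply stream amounts to, mirroring the switch above: the read buffer is a sequence of BR_* command words, and a BR_REPLY entry is followed by a binder_transaction_data payload (simplified; only a few commands are handled, and the UAPI header path is an assumption):

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <cstdio>
#include <linux/android/binder.h>  /* UAPI header; path is an assumption and may vary */

/* Walk the buffer the driver filled during a BINDER_WRITE_READ read. */
void parseReadBuffer(const uint8_t* buf, size_t consumed) {
    size_t pos = 0;
    while (pos + sizeof(uint32_t) <= consumed) {
        uint32_t cmd;
        memcpy(&cmd, buf + pos, sizeof(cmd));   /* each entry starts with a command word */
        pos += sizeof(cmd);
        switch (cmd) {
        case BR_NOOP:                           /* padding the driver writes at the start of a read */
            break;
        case BR_TRANSACTION_COMPLETE:           /* the driver accepted our BC_TRANSACTION */
            printf("transaction accepted by the driver\n");
            break;
        case BR_REPLY: {                        /* the actual reply from the server process */
            binder_transaction_data tr;
            memcpy(&tr, buf + pos, sizeof(tr));
            pos += sizeof(tr);
            printf("reply: %llu bytes at %p\n",
                   (unsigned long long)tr.data_size,
                   (void*)(uintptr_t)tr.data.ptr.buffer);
            break;
        }
        default:                                /* other BR_* commands are not handled in this sketch */
            printf("unhandled cmd 0x%x\n", cmd);
            return;
        }
    }
}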

/*declared as:
status_t talkWithDriver(bool doReceive=true);
i.e. the parameter defaults to true
*/
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {/*check the file descriptor of the /dev/binder device node*/
        return -EBADF;
    }
    binder_write_read bwr;/*this struct is also used by service manager; it holds both the data to write and the buffer to read into*/
    
    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    
    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    
    bwr.write_size = outAvail; /*store the data destined for the binder driver in the binder_write_read struct*/
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {/*if data is expected back from the binder driver, tell it where to put the reply*/
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR; /*nothing to write and nothing to read: return immediately*/

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)/*do the actual read/write against the binder driver; the ioctl is handled by the binder driver*/
            err = NO_ERROR;
        else
            err = -errno;
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);/*normally a single pass through the loop and we are done*/

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {/*the binder driver consumed some of the data handed to it*/
            if (bwr.write_consumed < mOut.dataSize())/*the driver did not consume everything it was given*/
                mOut.remove(0, bwr.write_consumed);/*drop the part that has already been consumed*/
            else
                mOut.setDataSize(0);/*the driver consumed everything, so reset mOut to empty*/
        }
        if (bwr.read_consumed > 0) {/*the binder driver wrote reply data into the buffer the client specified*/
            mIn.setDataSize(bwr.read_consumed);/*the amount written by the driver is the size of the reply data*/
            mIn.setDataPosition(0);
        }
        return NO_ERROR;
    }
    return err;
}

So the client process pushes the request down from the Java layer into the C++ layer step by step, packages the request data in a form the binder driver can recognize, and finally exchanges data with the driver (and gets the result back) in talkWithDriver. The call chain is:

request service → getService (ServiceManager.java) → getService (ServiceManagerProxy.java) → transact (BinderProxy.java) → transact (BpBinder.cpp) → transact (IPCThreadState.cpp) → waitForResponse (IPCThreadState.cpp) → talkWithDriver (IPCThreadState.cpp) → ioctl system call → binder driver
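
Putting the last two steps together: talkWithDriver fills a binder_write_read struct with the mOut/mIn buffers and issues a single BINDER_WRITE_READ ioctl. A hedged sketch of that round trip against an already-opened driver fd (simplified, with no flow control beyond the EINTR retry, and with an assumed UAPI header path):

#include <cstddef>
#include <cstdint>
#include <cerrno>
#include <sys/ioctl.h>
#include <linux/android/binder.h>  /* UAPI header; path is an assumption and may vary */

/* One BINDER_WRITE_READ round trip, as talkWithDriver performs it:
   writeBuf holds the BC_* command stream, readBuf receives the BR_* replies. */
int binderWriteRead(int fd,
                    void* writeBuf, size_t writeSize,
                    void* readBuf,  size_t readCapacity,
                    size_t* outWriteConsumed, size_t* outReadConsumed) {
    binder_write_read bwr{};
    bwr.write_buffer = (binder_uintptr_t)(uintptr_t)writeBuf;  /* commands for the driver */
    bwr.write_size   = writeSize;
    bwr.read_buffer  = (binder_uintptr_t)(uintptr_t)readBuf;   /* where the BR_* replies land */
    bwr.read_size    = readCapacity;

    int ret;
    do {
        ret = ioctl(fd, BINDER_WRITE_READ, &bwr);   /* one syscall does both the write and the read */
    } while (ret < 0 && errno == EINTR);            /* retry if interrupted, like talkWithDriver does */

    if (ret < 0) return -errno;
    *outWriteConsumed = bwr.write_consumed;         /* how much of writeBuf the driver took */
    *outReadConsumed  = bwr.read_consumed;          /* how much reply data it produced */
    return 0;
}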

2. The Binder driver

So far we have only seen the client process call ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) to read from and write to the binder driver, without seeing what happens inside. So it is worth looking at the implementation in bionic/libc/bionic/ioctl.cpp:

#include <sys/ioctl.h>
#include <stdarg.h>
extern "C" int __ioctl(int, int, void *);
int ioctl(int fd, int request, ...) {
  va_list ap;
  va_start(ap, request);
  void* arg = va_arg(ap, void*);
  va_end(ap);
  return __ioctl(fd, request, arg);/*system call*/
}

The __ioctl call eventually lands in the binder_ioctl function registered by the binder driver:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
 int ret;
 struct binder_proc *proc = filp->private_data;/*the variable created in binder_open; it records all the binder-related state for this client process*/
 struct binder_thread *thread;
 unsigned int size = _IOC_SIZE(cmd);
 void __user *ubuf = (void __user *)arg;

 /*pr_info("binder_ioctl: %d:%d %x %lx\n",
   proc->pid, current->pid, cmd, arg);*/

 trace_binder_ioctl(cmd, arg);
/*wait_event_interruptible is actually a macro: if the condition binder_stop_on_user_error < 2 holds it returns 0 immediately, otherwise the process gives up the CPU and sleeps on binder_user_error_wait*/
 ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
 if (ret)
  goto err_unlocked;

 binder_lock(__func__);
 thread = binder_get_thread(proc);/*look up, or add, a node for this thread*/
 if (thread == NULL) {
  ret = -ENOMEM;
  goto err;
 }
/*start handling the specific command*/
 switch (cmd) {
 case BINDER_WRITE_READ:
  ret = binder_ioctl_write_read(filp, cmd, arg, thread);
  if (ret)
   goto err;
  break;
/*other cases omitted*/
}
 ret = 0;
err:
 if (thread)
  thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
 binder_unlock(__func__);
 wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
 if (ret && ret != -ERESTARTSYS)
  pr_info("%d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
 trace_binder_ioctl_done(ret);
 return ret;
}
static int binder_open(struct inode *nodp, struct file *filp)/*mainly creates a binder_proc variable and initializes it*/
{
 struct binder_proc *proc;
 binder_debug(BINDER_DEBUG_OPEN_CLOSE, "binder_open: %d:%d\n",
       current->group_leader->pid, current->pid);

 proc = kzalloc(sizeof(*proc), GFP_KERNEL);
 if (proc == NULL)
  return -ENOMEM;
 get_task_struct(current);
 proc->tsk = current;
 INIT_LIST_HEAD(&proc->todo);
 init_waitqueue_head(&proc->wait);
 proc->default_priority = task_nice(current);

 binder_lock(__func__);

 binder_stats_created(BINDER_STAT_PROC);
 hlist_add_head(&proc->proc_node, &binder_procs);
 proc->pid = current->group_leader->pid;
 INIT_LIST_HEAD(&proc->delivered_death);
 filp->private_data = proc;

 binder_unlock(__func__);

 if (binder_debugfs_dir_entry_proc) {
  char strbuf[11];

  snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
  proc->debugfs_entry = debugfs_create_file(strbuf, S_IRUGO,
   binder_debugfs_dir_entry_proc, proc, &binder_proc_fops);
 }
 return 0;
}

Next, what is done under the BINDER_WRITE_READ command:

static int binder_ioctl_write_read(struct file *filp, unsigned int cmd, unsigned long arg,   struct binder_thread *thread)
{
 int ret = 0;
 struct binder_proc *proc = filp->private_data; /*get the proc variable created in binder_open*/
 unsigned int size = _IOC_SIZE(cmd);
 void __user *ubuf = (void __user *)arg;
 struct binder_write_read bwr;

 if (size != sizeof(struct binder_write_read)) {/*check that the buffer size is what we expect*/
  ret = -EINVAL;
  goto out;
 }
 if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {//copy the data from user space into the local variable bwr
  ret = -EFAULT;
  goto out;
 }
 if (bwr.write_size > 0) {/*if bwr.write_size > 0, perform the write*/
  ret = binder_thread_write(proc, thread,bwr.write_buffer, bwr.write_size,&bwr.write_consumed);
  trace_binder_write_done(ret);
  if (ret < 0) {
   bwr.read_consumed = 0;
   if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
    ret = -EFAULT;
   goto out;
  }
 }
 if (bwr.read_size > 0) {/*if bwr.read_size > 0, perform the read*/
  ret = binder_thread_read(proc, thread, bwr.read_buffer, bwr.read_size,   &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
  trace_binder_read_done(ret);
  if (!list_empty(&proc->todo))
   wake_up_interruptible(&proc->wait);
  if (ret < 0) {
   if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
    ret = -EFAULT;
   goto out;
  }
 }
 if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {/*copy the data that was read back from kernel space to user space, i.e. into the memory arg points at*/
  ret = -EFAULT;
  goto out;
 }
out:
 return ret;
}

This function mainly does the following:

  1. Copies the client's request data from user space into the bwr variable
  2. If there is data to write, performs the write
  3. If there is data to read, performs the read

2.1 Writing data to the binder driver

/*  ret = binder_thread_write(proc, thread,bwr.write_buffer, bwr.write_size,&bwr.write_consumed); */
static int binder_thread_write(struct binder_proc *proc,struct binder_thread *thread,binder_uintptr_t binder_buffer, size_t size, binder_size_t *consumed)
{
 uint32_t cmd;
 void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
 void __user *ptr = buffer + *consumed;/* start of the data to process*/
 void __user *end = buffer + size;/* end of the data to process*/

 while (ptr < end && thread->return_error == BR_OK) {/*process in a loop until the end of the data is reached*/
  if (get_user(cmd, (uint32_t __user *)ptr))/*fetch the cmd*/
   return -EFAULT;
  ptr += sizeof(uint32_t);/*advance past the cmd that was just taken out*/
  trace_binder_command(cmd);
  if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
   binder_stats.bc[_IOC_NR(cmd)]++;
   proc->stats.bc[_IOC_NR(cmd)]++;
   thread->stats.bc[_IOC_NR(cmd)]++;
  }
/*start handling the cmd; the cmd just taken out is BC_TRANSACTION */
  switch (cmd) {
/*other cases omitted*/
  case BC_TRANSACTION:
  case BC_REPLY: {
   struct binder_transaction_data tr;
   if (copy_from_user(&tr, ptr, sizeof(tr)))/*copy the binder_transaction_data struct out of user space*/
    return -EFAULT;
   ptr += sizeof(tr); /*advance past the binder_transaction_data struct that was just copied */
   binder_transaction(proc, thread, &tr, cmd == BC_REPLY); /*carry out the actual command*/
   break;
  }
/*other cases omitted*/
   }
  *consumed = ptr - buffer;/*record how far into the data we have processed*/
 }
 return 0;
}

Then look at the implementation of binder_transaction:

/*
proc   - the binder state of the client process
thread - the client thread
tr     - the data being sent to the target (server) process
reply  - whether this is a BC_REPLY (i.e. cmd == BC_REPLY)
*/
static void binder_transaction(struct binder_proc *proc,struct binder_thread *thread, struct binder_transaction_data *tr, int reply)
{
    struct binder_transaction *t;
    struct binder_work *tcomplete;
    binder_size_t *offp, *off_end;
    binder_size_t off_min;
    struct binder_proc *target_proc;
    struct binder_thread *target_thread = NULL;
    struct binder_node *target_node = NULL;
    struct list_head *target_list;
    wait_queue_head_t *target_wait;
    struct binder_transaction *in_reply_to = NULL;
    struct binder_transaction_log_entry *e;
    uint32_t return_error;
/*used for transaction logging*/
    e = binder_transaction_log_add(&binder_transaction_log);
    e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
    e->from_proc = proc->pid;
    e->from_thread = thread->pid;
    e->target_handle = tr->target.handle;
    e->data_size = tr->data_size;
    e->offsets_size = tr->offsets_size;
/*when sending a request to the service manager, cmd = BC_TRANSACTION, so reply is false*/
    if (reply) {
        in_reply_to = thread->transaction_stack;
        if (in_reply_to == NULL) {
            binder_user_error("%d:%d got reply transaction with no transaction stack\n",
                      proc->pid, thread->pid);
            return_error = BR_FAILED_REPLY;
            goto err_empty_call_stack;
        }
        binder_set_nice(in_reply_to->saved_priority);
        if (in_reply_to->to_thread != thread) {
            binder_user_error("%d:%d got reply transaction with bad transaction stack, transaction %d has target %d:%d\n",
                proc->pid, thread->pid, in_reply_to->debug_id,
                in_reply_to->to_proc ?
                in_reply_to->to_proc->pid : 0,
                in_reply_to->to_thread ?
                in_reply_to->to_thread->pid : 0);
            return_error = BR_FAILED_REPLY;
            in_reply_to = NULL;
            goto err_bad_call_stack;
        }
        thread->transaction_stack = in_reply_to->to_parent;
        target_thread = in_reply_to->from;
        if (target_thread == NULL) {
            return_error = BR_DEAD_REPLY;
            goto err_dead_binder;
        }
        if (target_thread->transaction_stack != in_reply_to) {
            binder_user_error("%d:%d got reply transaction with bad target transaction stack %d, expected %d\n",
                proc->pid, thread->pid,
                target_thread->transaction_stack ?
                target_thread->transaction_stack->debug_id : 0,
                in_reply_to->debug_id);
            return_error = BR_FAILED_REPLY;
            in_reply_to = NULL;
            target_thread = NULL;
            goto err_dead_binder;
        }
        target_proc = target_thread->proc;
    } else {
        if (tr->target.handle) {/*handle is not 0*/
            struct binder_ref *ref;
            ref = binder_get_ref(proc, tr->target.handle, true);
            if (ref == NULL) {
                binder_user_error("%d:%d got transaction to invalid handle\n",proc->pid, thread->pid);
                return_error = BR_FAILED_REPLY;
                goto err_invalid_target_handle;
            }
            target_node = ref->node;
        } else {/*the handle is 0, i.e. the target service is the service manager */
            target_node = binder_context_mgr_node;/*if the target is the service manager, there is no lookup: the global binder_context_mgr_node is used directly */
            if (target_node == NULL) {
                return_error = BR_DEAD_REPLY;
                goto err_no_context_mgr_node;
            }
        }
        e->to_node = target_node->debug_id;
        target_proc = target_node->proc;/*find the target process's binder state via target_node*/
        if (target_proc == NULL) {
            return_error = BR_DEAD_REPLY;
            goto err_dead_binder;
        }
        if (security_binder_transaction(proc->tsk, target_proc->tsk) < 0) {
            return_error = BR_FAILED_REPLY;
            goto err_invalid_target_handle;
        }
        if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
            struct binder_transaction *tmp;

            tmp = thread->transaction_stack;
            if (tmp->to_thread != thread) {
                binder_user_error("%d:%d got new transaction with bad transaction stack, transaction %d has target %d:%d\n",proc->pid, thread->pid, tmp->debug_id, tmp->to_proc ? tmp->to_proc->pid : 0, tmp->to_thread ?tmp->to_thread->pid : 0);
                return_error = BR_FAILED_REPLY;
                goto err_bad_call_stack;
            }
            while (tmp) {/*walk the transaction stack*/
                if (tmp->from && tmp->from->proc == target_proc)
                    target_thread = tmp->from;/*found the target thread*/
                tmp = tmp->from_parent;
            }
        }
    }
/*this yields two lists: todo and wait*/
    if (target_thread) {
        e->to_thread = target_thread->pid;
        target_list = &target_thread->todo;
        target_wait = &target_thread->wait;
    } else {
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }
    e->to_proc = target_proc->pid;

    /* TODO: reuse incoming transaction for reply */
    t = kzalloc(sizeof(*t), GFP_KERNEL); /*allocate space for the binder_transaction variable t*/
    if (t == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_alloc_t_failed;
    }
    binder_stats_created(BINDER_STAT_TRANSACTION);

    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL); /*allocate space for the binder_work variable tcomplete*/
    if (tcomplete == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_alloc_tcomplete_failed;
    }
    binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);

    t->debug_id = ++binder_last_id;
    e->debug_id = t->debug_id;
/*debug info*/
    if (reply)
        binder_debug(BINDER_DEBUG_TRANSACTION,
                 "%d:%d BC_REPLY %d -> %d:%d, data %016llx-%016llx size %lld-%lld\n",
                 proc->pid, thread->pid, t->debug_id,
                 target_proc->pid, target_thread->pid,
                 (u64)tr->data.ptr.buffer,
                 (u64)tr->data.ptr.offsets,
                 (u64)tr->data_size, (u64)tr->offsets_size);
    else
        binder_debug(BINDER_DEBUG_TRANSACTION,
                 "%d:%d BC_TRANSACTION %d -> %d - node %d, data %016llx-%016llx size %lld-%lld\n",
                 proc->pid, thread->pid, t->debug_id,
                 target_proc->pid, target_node->debug_id,
                 (u64)tr->data.ptr.buffer,
                 (u64)tr->data.ptr.offsets,
                 (u64)tr->data_size, (u64)tr->offsets_size);

/*fill in the binder_transaction variable t*/
    if (!reply && !(tr->flags & TF_ONE_WAY))
        t->from = thread;/*the thread that initiated the transaction*/
    else
        t->from = NULL;
    t->sender_euid = task_euid(proc->tsk);
    t->to_proc = target_proc;
    t->to_thread = target_thread;
    t->code = tr->code;/*the request code sent to the target*/
    t->flags = tr->flags;
    t->priority = task_nice(current);

    trace_binder_transaction(reply, t, target_node);

    t->buffer = binder_alloc_buf(target_proc, tr->data_size,tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));/*allocate a buffer the size of the request data; the physical pages behind this buffer belong to the target (service) process's mmap'ed area. Its type is binder_buffer*/
    if (t->buffer == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_binder_alloc_buf_failed;
    }
    t->buffer->allow_user_free = 0;
    t->buffer->debug_id = t->debug_id;
    t->buffer->transaction = t;
    t->buffer->target_node = target_node;
    trace_binder_transaction_alloc_buf(t->buffer);
    if (target_node)
        binder_inc_node(target_node, 1, 0, NULL);

    offp = (binder_size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));
/*copy the request data into the t->buffer memory; note that the physical memory behind t->buffer is the same memory that backs the service process's mapped buffer. Put simply, the client's data is copied straight into the service process's memory, so the service process can access it directly*/
    if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)tr->data.ptr.buffer, tr->data_size)) {
        binder_user_error("%d:%d got transaction with invalid data ptr\n", proc->pid, thread->pid);
        return_error = BR_FAILED_REPLY;
        goto err_copy_data_failed;
    }
    if (copy_from_user(offp, (const void __user *)(uintptr_t) tr->data.ptr.offsets, tr->offsets_size)) {
        binder_user_error("%d:%d got transaction with invalid offsets ptr\n", proc->pid, thread->pid);
        return_error = BR_FAILED_REPLY;
        goto err_copy_data_failed;
    }
    if (!IS_ALIGNED(tr->offsets_size, sizeof(binder_size_t))) {/*alignment check on the offsets*/
        binder_user_error("%d:%d got transaction with invalid offsets size, %lld\n", proc->pid, thread->pid, (u64)tr->offsets_size);
        return_error = BR_FAILED_REPLY;
        goto err_bad_offset;
    }
    off_end = (void *)offp + tr->offsets_size;
    off_min = 0;
    for (; offp < off_end; offp++) {
        struct flat_binder_object *fp;

        if (*offp > t->buffer->data_size - sizeof(*fp) ||*offp < off_min ||t->buffer->data_size < sizeof(*fp) || !IS_ALIGNED(*offp, sizeof(u32))) {
            binder_user_error("%d:%d got transaction with invalid offset, %lld (min %lld, max %lld)\n",
                      proc->pid, thread->pid, (u64)*offp,
                      (u64)off_min,
                      (u64)(t->buffer->data_size -
                      sizeof(*fp)));
            return_error = BR_FAILED_REPLY;
            goto err_bad_offset;
        }
        fp = (struct flat_binder_object *)(t->buffer->data + *offp);
        off_min = *offp + sizeof(struct flat_binder_object);
        switch (fp->type) {
        case BINDER_TYPE_BINDER:
        case BINDER_TYPE_WEAK_BINDER: {
            struct binder_ref *ref;
            struct binder_node *node = binder_get_node(proc, fp->binder);

            if (node == NULL) {
                node = binder_new_node(proc, fp->binder, fp->cookie);
                if (node == NULL) {
                    return_error = BR_FAILED_REPLY;
                    goto err_binder_new_node_failed;
                }
                node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
                node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
            }
            if (fp->cookie != node->cookie) {
                binder_user_error("%d:%d sending u%016llx node %d, cookie mismatch %016llx != %016llx\n",
                    proc->pid, thread->pid,
                    (u64)fp->binder, node->debug_id,
                    (u64)fp->cookie, (u64)node->cookie);
                return_error = BR_FAILED_REPLY;
                goto err_binder_get_ref_for_node_failed;
            }
    if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_get_ref_for_node_failed;
            }
            ref = binder_get_ref_for_node(target_proc, node);
            if (ref == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_get_ref_for_node_failed;
            }
            if (fp->type == BINDER_TYPE_BINDER)
                fp->type = BINDER_TYPE_HANDLE;
            else
                fp->type = BINDER_TYPE_WEAK_HANDLE;
            fp->binder = 0;
            fp->handle = ref->desc;
            fp->cookie = 0;
            binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
                       &thread->todo);

            trace_binder_transaction_node_to_ref(t, node, ref);
            binder_debug(BINDER_DEBUG_TRANSACTION,
                     "        node %d u%016llx -> ref %d desc %d\n",
                     node->debug_id, (u64)node->ptr,
                     ref->debug_id, ref->desc);
        } break;
        case BINDER_TYPE_HANDLE:
        case BINDER_TYPE_WEAK_HANDLE: {
/*
1. Look up the binder_ref corresponding to this flat_binder_object
2. Handle it differently depending on whether the node's owning process is the target process: if it is, convert the handle back into a local binder; otherwise create a reference to the node in the target process
*/
            struct binder_ref *ref = binder_get_ref(proc, fp->handle, fp->type == BINDER_TYPE_HANDLE);
            if (ref == NULL) {
                binder_user_error("%d:%d got transaction with invalid handle, %d\n",
                        proc->pid,
                        thread->pid, fp->handle);
                return_error = BR_FAILED_REPLY;
                goto err_binder_get_ref_failed;
            }
            if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_get_ref_failed;
            }
            if (ref->node->proc == target_proc) {
                if (fp->type == BINDER_TYPE_HANDLE)
                    fp->type = BINDER_TYPE_BINDER;
                else
                    fp->type = BINDER_TYPE_WEAK_BINDER;
                fp->binder = ref->node->ptr;
                fp->cookie = ref->node->cookie;
                binder_inc_node(ref->node, fp->type == BINDER_TYPE_BINDER, 0, NULL);
                trace_binder_transaction_ref_to_node(t, ref);
                binder_debug(BINDER_DEBUG_TRANSACTION, "  ref %d desc %d -> node %d u%016llx\n",ref->debug_id, ref->desc, ref->node->debug_id,(u64)ref->node->ptr);
            } else {
                struct binder_ref *new_ref;
                new_ref = binder_get_ref_for_node(target_proc, ref->node);
                if (new_ref == NULL) {
                    return_error = BR_FAILED_REPLY;
                    goto err_binder_get_ref_for_node_failed;
                }
                fp->binder = 0;
                fp->handle = new_ref->desc;
                fp->cookie = 0;
                binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
                trace_binder_transaction_ref_to_ref(t, ref,new_ref);
                binder_debug(BINDER_DEBUG_TRANSACTION,   "        ref %d desc %d -> ref %d desc %d (node %d)\n", ref->debug_id, ref->desc, new_ref->debug_id,new_ref->desc, ref->node->debug_id);
            }
        } break;

        case BINDER_TYPE_FD: {
            int target_fd;
            struct file *file;

            if (reply) {
                if (!(in_reply_to->flags & TF_ACCEPT_FDS)) {
                    binder_user_error("%d:%d got reply with fd, %d, but target does not allow fds\n",
                        proc->pid, thread->pid, fp->handle);
                    return_error = BR_FAILED_REPLY;
                    goto err_fd_not_allowed;
                }
            } else if (!target_node->accept_fds) {
                binder_user_error("%d:%d got transaction with fd, %d, but target does not allow fds\n",
                    proc->pid, thread->pid, fp->handle);
                return_error = BR_FAILED_REPLY;
                goto err_fd_not_allowed;
            }

            file = fget(fp->handle);
            if (file == NULL) {
                binder_user_error("%d:%d got transaction with invalid fd, %d\n",
                    proc->pid, thread->pid, fp->handle);
                return_error = BR_FAILED_REPLY;
                goto err_fget_failed;
            }
            if (security_binder_transfer_file(proc->tsk, target_proc->tsk, file) < 0) {
                fput(file);
                return_error = BR_FAILED_REPLY;
                goto err_get_unused_fd_failed;
            }
            target_fd = task_get_unused_fd_flags(target_proc, O_CLOEXEC);
            if (target_fd < 0) {
                fput(file);
                return_error = BR_FAILED_REPLY;
                goto err_get_unused_fd_failed;
            }
            task_fd_install(target_proc, target_fd, file);
            trace_binder_transaction_fd(t, fp->handle, target_fd);
            binder_debug(BINDER_DEBUG_TRANSACTION,
                     "        fd %d -> %d\n", fp->handle, target_fd);
            /* TODO: fput? */
            fp->binder = 0;
            fp->handle = target_fd;
        } break;

        default:
            binder_user_error("%d:%d got transaction with invalid object type, %x\n",
                proc->pid, thread->pid, fp->type);
            return_error = BR_FAILED_REPLY;
            goto err_bad_object_type;
        }
    }
    if (reply) {/*at this point reply is still false*/
        BUG_ON(t->buffer->async_transaction != 0);
        binder_pop_transaction(target_thread, in_reply_to);
    } else if (!(t->flags & TF_ONE_WAY)) {
        BUG_ON(t->buffer->async_transaction != 0);
        t->need_reply = 1;/*record this transaction */
        t->from_parent = thread->transaction_stack;
        thread->transaction_stack = t;
    } else {
        BUG_ON(target_node == NULL);
        BUG_ON(t->buffer->async_transaction != 1);
        if (target_node->has_async_transaction) {
            target_list = &target_node->async_todo;
            target_wait = NULL;
        } else
            target_node->has_async_transaction = 1;
    }
    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list);/*append to target_list, i.e. the target process's todo queue*/
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);/*append to the client thread's own todo queue, i.e. there is still a pending piece of work*/
    if (target_wait) {
        if (reply || !(t->flags & TF_ONE_WAY)) {
            preempt_disable();
            wake_up_interruptible_sync(target_wait);/*wake up the service process*/
            preempt_enable_no_resched();
        } else {
            wake_up_interruptible(target_wait);
        }
    }
    return;

err_get_unused_fd_failed:
err_fget_failed:
err_fd_not_allowed:
err_binder_get_ref_for_node_failed:
err_binder_get_ref_failed:
err_binder_new_node_failed:
err_bad_object_type:
err_bad_offset:
err_copy_data_failed:
    trace_binder_transaction_failed_buffer_release(t->buffer);
    binder_transaction_buffer_release(target_proc, t->buffer, offp);
    t->buffer->transaction = NULL;
    binder_free_buf(target_proc, t->buffer);
err_binder_alloc_buf_failed:
    kfree(tcomplete);
    binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
err_alloc_tcomplete_failed:
    kfree(t);
    binder_stats_deleted(BINDER_STAT_TRANSACTION);
err_alloc_t_failed:
err_bad_call_stack:
err_empty_call_stack:
err_dead_binder:
err_invalid_target_handle:
err_no_context_mgr_node:
    binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
             "%d:%d transaction failed %d, size %lld-%lld\n",
             proc->pid, thread->pid, return_error,
             (u64)tr->data_size, (u64)tr->offsets_size);
    {
        struct binder_transaction_log_entry *fe;

        fe = binder_transaction_log_add(&binder_transaction_log_failed);
        *fe = *e;
    }

    BUG_ON(thread->return_error != BR_OK);
    if (in_reply_to) {
        thread->return_error = BR_TRANSACTION_COMPLETE;
        binder_send_failed_reply(in_reply_to, return_error);
    } else
        thread->return_error = return_error;
}

2.2 Reading data from the binder driver

static int binder_thread_read(struct binder_proc *proc,struct binder_thread *thread,binder_uintptr_t binder_buffer, size_t size,binder_size_t *consumed, int non_block)
{
 void __user *buffer = (void __user *)(uintptr_t)binder_buffer;/*the buffer the data will be written into*/
 void __user *ptr = buffer + *consumed;/*start position of the data*/
 void __user *end = buffer + size;/*end position of the data*/

 int ret = 0;
 int wait_for_proc_work;

 if (*consumed == 0) {
  if (put_user(BR_NOOP, (uint32_t __user *)ptr))/* if this is the start of the read, write a BR_NOOP first*/
   return -EFAULT;
  ptr += sizeof(uint32_t);
 }

retry:
 wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo);

 if (thread->return_error != BR_OK && ptr < end) {
  if (thread->return_error2 != BR_OK) {
   if (put_user(thread->return_error2, (uint32_t __user *)ptr))
    return -EFAULT;
   ptr += sizeof(uint32_t);
   binder_stat_br(proc, thread, thread->return_error2);
   if (ptr == end)
    goto done;
   thread->return_error2 = BR_OK;
  }
  if (put_user(thread->return_error, (uint32_t __user *)ptr))
   return -EFAULT;
  ptr += sizeof(uint32_t);
  binder_stat_br(proc, thread, thread->return_error);
  thread->return_error = BR_OK;
  goto done;
 }


 thread->looper |= BINDER_LOOPER_STATE_WAITING;
 if (wait_for_proc_work)
  proc->ready_threads++;

 binder_unlock(__func__);

 trace_binder_wait_for_work(wait_for_proc_work,
       !!thread->transaction_stack,
       !list_empty(&thread->todo));
 if (wait_for_proc_work) {
  if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
     BINDER_LOOPER_STATE_ENTERED))) {
   binder_user_error("%d:%d ERROR: Thread waiting for process work before calling BC_REGISTER_LOOPER or BC_ENTER_LOOPER (state %x)\n",
    proc->pid, thread->pid, thread->looper);
   wait_event_interruptible(binder_user_error_wait,
       binder_stop_on_user_error < 2);
  }
  binder_set_nice(proc->default_priority);
  if (non_block) {
   if (!binder_has_proc_work(proc, thread))
    ret = -EAGAIN;
  } else
   ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
 } else {
  if (non_block) {
   if (!binder_has_thread_work(thread))
    ret = -EAGAIN;
  } else
   ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));/*normally the process goes to sleep here, until the service process writes the result into this process's memory and wakes it up, at which point execution continues from this point*/
 }

 binder_lock(__func__);

 if (wait_for_proc_work)
  proc->ready_threads--;
 thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

 if (ret)
  return ret;

 while (1) {
  uint32_t cmd;
  struct binder_transaction_data tr;
  struct binder_work *w;
  struct binder_transaction *t = NULL;

  if (!list_empty(&thread->todo)) {/*at this point thread->todo is not empty*/
   w = list_first_entry(&thread->todo, struct binder_work,entry);
  } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
   w = list_first_entry(&proc->todo, struct binder_work,
          entry);
  } else {
   /* no data added */
   if (ptr - buffer == 4 &&
       !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN))
    goto retry;
   break;
  }

  if (end - ptr < sizeof(tr) + 4)/*check there is enough space left*/
   break;

  switch (w->type) {/*here w->type = BINDER_WORK_TRANSACTION_COMPLETE*/
  case BINDER_WORK_TRANSACTION: {
   t = container_of(w, struct binder_transaction, work);
  } break;
  case BINDER_WORK_TRANSACTION_COMPLETE: {
   cmd = BR_TRANSACTION_COMPLETE;
   if (put_user(cmd, (uint32_t __user *)ptr))/*write the BR_TRANSACTION_COMPLETE cmd into the read buffer*/
    return -EFAULT;
   ptr += sizeof(uint32_t);/*advance the pointer by one uint32_t*/

   binder_stat_br(proc, thread, cmd);
   binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE, "%d:%d BR_TRANSACTION_COMPLETE\n",proc->pid, thread->pid);

   list_del(&w->entry);/*remove the entry that has just been handled*/
   kfree(w);
   binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
  } break;/*at this point the read buffer contains two cmds: BR_NOOP and BR_TRANSACTION_COMPLETE*/
}
 /* remaining cases omitted*/
  BUG_ON(t->buffer == NULL);
  if (t->buffer->target_node) {
   struct binder_node *target_node = t->buffer->target_node;
   tr.target.ptr = target_node->ptr;
   tr.cookie = target_node->cookie;
   t->saved_priority = task_nice(current);
   if (t->priority < target_node->min_priority &&
       !(t->flags & TF_ONE_WAY))
    binder_set_nice(t->priority);
   else if (!(t->flags & TF_ONE_WAY) ||t->saved_priority > target_node->min_priority)
    binder_set_nice(target_node->min_priority);
   cmd = BR_TRANSACTION;
  } else {
   tr.target.ptr = 0;
   tr.cookie = 0;
   cmd = BR_REPLY;
  }
  tr.code = t->code;
  tr.flags = t->flags;
  tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);
  if (t->from) {
   struct task_struct *sender = t->from->proc->tsk;

   tr.sender_pid = task_tgid_nr_ns(sender,
       task_active_pid_ns(current));
  } else {
   tr.sender_pid = 0;
  }

  tr.data_size = t->buffer->data_size;/*size of the data*/
  tr.offsets_size = t->buffer->offsets_size;
  tr.data.ptr.buffer = (binder_uintptr_t)( (uintptr_t)t->buffer->data + proc->user_buffer_offset);/*address where the data is stored*/
  tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size,sizeof(void *));

  if (put_user(cmd, (uint32_t __user *)ptr))
   return -EFAULT;
  ptr += sizeof(uint32_t);
  if (copy_to_user(ptr, &tr, sizeof(tr)))/*copy tr to the location ptr points at*/
   return -EFAULT;
  ptr += sizeof(tr);/*advance the pointer past one tr*/

  trace_binder_transaction_received(t);
  binder_stat_br(proc, thread, cmd);

  list_del(&t->work.entry);/*remove t from the list*/
  t->buffer->allow_user_free = 1;
  if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
   t->to_parent = thread->transaction_stack;
   t->to_thread = thread;
   thread->transaction_stack = t;
  } else {
   t->buffer->transaction = NULL;
   kfree(t);
   binder_stats_deleted(BINDER_STAT_TRANSACTION);

}/*and with that, one full pass of the binder_ioctl flow is complete*/
 return 0;
}

References: Binder 通信笔记 (Java); 《深入理解Android内核设计思想》

