These notes are purely a personal study record; most of the content comes from the excellent articles of others, excerpted here and reorganized in my own way as a learning aid.
The articles I value and recommend most:
http://blog.youkuaiyun.com/innost/article/details/47208049  Innost's Binder series
https://my.oschina.net/youranhongcha/blog/149575  Hou Liang's Binder series.
1. binder_state: the state of the opened Binder device file.
Seen from the service manager side, it is defined in frameworks/base/cmds/servicemanager/binder.c.
struct binder_state
{
int fd; //file descriptor returned by open("/dev/binder")
void *mapped; //start address (in user space) at which the "/dev/binder" device file is mmap'ed into the process's virtual address space
unsigned mapsize; //size of the mapped area
};
In short, it records the file descriptor (handle) returned by open("/dev/binder") and the start address at which that device file is mmap'ed into the process's user space. A usage sketch follows.
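For reference, a trimmed sketch of how these fields get filled in, modeled on servicemanager's binder_open(); the BINDER_VERSION check and most error handling are left out, and the exact code differs between Android versions:
struct binder_state *binder_open(size_t mapsize)
{
    struct binder_state *bs = malloc(sizeof(*bs));

    bs->fd = open("/dev/binder", O_RDWR);        /* fd: the handle to the driver */
    bs->mapsize = mapsize;                       /* servicemanager passes 128*1024 here */
    bs->mapped = mmap(NULL, mapsize, PROT_READ,  /* read-only, private mapping backed */
                      MAP_PRIVATE, bs->fd, 0);   /* by the driver's kernel buffer */
    if (bs->fd < 0 || bs->mapped == MAP_FAILED) {
        /* ... clean up and return NULL ... */
    }
    return bs;
}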
2. binder_proc: saves the context of a process that has opened the device file "/dev/binder" (i.e. it stores the process's information). Because each process's ProcessState is designed as a singleton, a process opens the Binder device only once, so there is one binder_proc describing the current process.
Defined in the Binder driver, binder.c.
struct binder_proc {
struct hlist_node proc_node;//links this proc into the global list of binder_procs.
struct rb_root threads; // root of a red-black tree (if rb-trees are unfamiliar, just treat this as the place where the data is stored)
struct rb_root nodes;
struct rb_root refs_by_desc;
struct rb_root refs_by_node;
int pid; //process id.
struct vm_area_struct *vma;
struct mm_struct *vma_vm_mm;
struct task_struct *tsk;
struct files_struct *files;
struct hlist_node deferred_work_node;
int deferred_work;
void *buffer;//start address, in kernel space, of the physical memory that gets mapped
//difference between the kernel virtual address and the user virtual address: if a physical page is mapped at
//address addr in kernel space, the same page sits at addr + user_buffer_offset in the process's user space
ptrdiff_t user_buffer_offset;
struct list_head buffers;//the memory area mapped via mmap, as a list of binder_buffers.
struct rb_root free_buffers;//free binder_buffers are linked, via their rb_node member, into the red-black tree rooted at free_buffers,
struct rb_root allocated_buffers;//in-use binder_buffers are linked, via their rb_node member, into the red-black tree rooted at allocated_buffers.
size_t free_async_space;
struct page **pages;// struct page is the data structure that describes one physical page
size_t buffer_size; //size of the memory area to be mapped.
uint32_t buffer_free;
struct list_head todo;
wait_queue_head_t wait;
struct binder_stats stats;
struct list_head delivered_death;
int max_threads;
int requested_threads;
int requested_threads_started;
int ready_threads;
long default_priority;
struct dentry *debugfs_entry;
};
binder_proc hangs its members off four red-black trees: threads, nodes, refs_by_desc and refs_by_node. For example, binder_node contains an rb_node member and is linked into the corresponding tree through it. (ref = reference, the trailing s marks the plural, node = the entity node.)
threads tree: holds the threads inside this binder_proc that handle user requests; their maximum number is bounded by max_threads.
nodes tree: holds the Binder entities that live in this binder_proc, i.e. the Binder objects that other processes call across process boundaries.
refs_by_desc: holds this binder_proc's Binder references (each reference corresponds to a Binder entity in another process); the key is the handle (descriptor) value and the value is the reference to the remote Binder entity. This is the commonly used one - see the lookup sketch below.
refs_by_node: also stores the references to remote Binder entities, but keyed by the address of the referenced entity node. Keeping two trees presumably just makes lookups convenient in both directions.
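To make the refs_by_desc lookup concrete, this is (approximately) the driver's binder_get_ref(), the helper used by binder_thread_write() and binder_transaction() further below; it is based on the pre-4.9 binder.c, so details may differ in other kernel versions:
static struct binder_ref *binder_get_ref(struct binder_proc *proc, uint32_t desc)
{
	struct rb_node *n = proc->refs_by_desc.rb_node;
	struct binder_ref *ref;

	while (n) {
		ref = rb_entry(n, struct binder_ref, rb_node_desc);

		if (desc < ref->desc)        /* plain binary search keyed on the */
			n = n->rb_left;      /* handle (desc) value              */
		else if (desc > ref->desc)
			n = n->rb_right;
		else
			return ref;          /* found: ref->node points at the remote binder_node */
	}
	return NULL;
}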
3. binder_node: describes a Binder entity; note that this is how a Binder is represented inside the driver, i.e. in the kernel.
Defined in the Binder driver, binder.c.
struct binder_node {
int debug_id;
struct binder_work work;
union {
struct rb_node rb_node;
struct hlist_node dead_node;
};
struct binder_proc *proc;
struct hlist_head refs;
int internal_strong_refs;
int local_weak_refs;
int local_strong_refs;
void __user *ptr;
void __user *cookie;
unsigned has_strong_ref : 1;
unsigned pending_strong_ref : 1;
unsigned has_weak_ref : 1;
unsigned pending_weak_ref : 1;
unsigned has_async_transaction : 1;
unsigned accept_fds : 1;
int min_priority : 8;
struct list_head async_todo;
};
rb_node and dead_node form a union. If the Binder entity is still in normal use, rb_node links it into the red-black tree represented by proc->nodes; if the process that owns the Binder entity has been destroyed while other processes still reference it, the entity is instead put into a hash list via dead_node.
proc: the process this Binder entity belongs to, i.e. its binder_proc.
refs: links together all the Binder references that refer to this entity, forming a list.
ptr: the address associated with this Binder entity in user space. (I still have doubts about this one!)
cookie: extra user-space data attached to this Binder entity.
4. binder_ref: describes a reference to a Binder entity. (At the driver level "entity" and "reference" are the two sides of the relationship; a reference corresponds to a Binder proxy in user space.)
Defined in the Binder driver, binder.c.
struct binder_ref {//this is the data structure stored in the two reference trees refs_by_desc and refs_by_node.
int debug_id;
struct rb_node rb_node_desc;//links into the refs_by_desc tree.
struct rb_node rb_node_node;//links into the refs_by_node tree.
struct hlist_node node_entry;
struct binder_proc *proc; //the process this reference belongs to,
struct binder_node *node;//the remote Binder entity (binder_node) this reference corresponds to.
uint32_t desc;
int strong;
int weak;
struct binder_ref_death *death;
};
5. binder_buffer: describes one segment of the mmap'ed memory area.
Defined in the Binder driver, binder.c.
This is how the driver manages the mapped address space, i.e. the range buffer ~ (buffer + buffer_size): the range is divided into segments, and each segment is described by a struct binder_buffer.
Each binder_buffer is linked, through its entry member and ordered from low address to high address, into the buffers list of its binder_proc.
struct binder_buffer {
struct list_head entry; //links into binder_proc->buffers
//a free binder_buffer is linked, via its rb_node member, into binder_proc's free_buffers red-black tree;
//an in-use binder_buffer is linked, via its rb_node member, into binder_proc's allocated_buffers red-black tree.
struct rb_node rb_node;
unsigned free:1; //each binder_buffer is either in use or free; the free bit distinguishes the two.
unsigned allow_user_free:1;
unsigned async_transaction:1;
unsigned debug_id:29;
struct binder_transaction *transaction;
struct binder_node *target_node;
size_t data_size;
size_t offsets_size;
uint8_t data[0];
};
6. binder_thread: describes a thread that carries out the current Binder transaction; defined in the driver.
Possible thread (looper) states:
enum {
BINDER_LOOPER_STATE_REGISTERED = 0x01,
BINDER_LOOPER_STATE_ENTERED = 0x02,
BINDER_LOOPER_STATE_EXITED = 0x04,
BINDER_LOOPER_STATE_INVALID = 0x08,
BINDER_LOOPER_STATE_WAITING = 0x10,
BINDER_LOOPER_STATE_NEED_RETURN = 0x20
};
struct binder_thread {
struct binder_proc *proc; //the process this thread belongs to.
struct rb_node rb_node; //links into the threads red-black tree of binder_proc.
int pid;
int looper;//the thread's state: a bitmask of the enum values above.
struct binder_transaction *transaction_stack; //the transaction (stack) the thread is currently working through
struct list_head todo; //list of data sent to this thread: the pending communication transactions.
uint32_t return_error; /* Write failed, return error code in read buf */
uint32_t return_error2; /* Write failed, return error code in read */
/* buffer. Used when sending a reply to a dead process that */
/* we are also waiting on */
wait_queue_head_t wait; //used to block the thread while it waits for some event to happen
struct binder_stats stats; //used to keep some statistics
};
7. In the kernel proper - these appear to be the pieces Google added to Linux to support Binder.
The header #include <uapi/linux/android/binder.h> contains many of the important data structures and definitions.
1) binder_write_read: the structure handed to ioctl() on the device file /dev/binder. All I/O on the device goes through this wrapper, on both the user-space and the kernel side, with conversions between the two as needed.
Note that write_* describes data being sent and read_* data being received; the handling depends on which of the two buffers carries data. The buffers pointed to by write_buffer and read_buffer contain struct binder_transaction_data, which packages one communication transaction. (A usage sketch follows the struct.)
struct binder_write_read {
binder_size_t write_size; /* bytes to write */
binder_size_t write_consumed; /* bytes consumed by driver */
binder_uintptr_t write_buffer;
binder_size_t read_size; /* bytes to read */
binder_size_t read_consumed; /* bytes consumed by driver */
binder_uintptr_t read_buffer;
};
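As a usage sketch, this is roughly how servicemanager's binder_write() fills the structure for a pure write (nothing to read) before handing it to the driver; on the framework side, IPCThreadState::talkWithDriver() does the equivalent, usually with both buffers populated:
int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;                 /* data to send: a stream of BC_ commands */
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;                    /* nothing to read back in this call */
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);  /* enters binder_ioctl() in the driver */
    if (res < 0)
        fprintf(stderr, "binder_write: ioctl failed (%s)\n", strerror(errno));
    return res;
}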
2) binder_transaction_data: can be read as one Binder communication transaction, including the wrapping of the Parcel data it carries.
enum transaction_flags {
TF_ONE_WAY = 0x01, /* this is a one-way call: async, no return */
TF_ROOT_OBJECT = 0x04, /* contents are the component's root object */
TF_STATUS_CODE = 0x08, /* contents are a 32-bit status code */
TF_ACCEPT_FDS = 0x10, /* allow replies with file descriptors */
};
struct binder_transaction_data {
/* The first two are only used for bcTRANSACTION and brTRANSACTION,
* identifying the target and contents of the transaction.
*/
union {
/* target descriptor of command transaction */
__u32 handle;//when the target of the command is not a local Binder entity, handle identifies the reference to that Binder entity
/* target descriptor of return transaction */
binder_uintptr_t ptr;//when the target of the command is a local Binder entity, ptr holds that object's address in this process.
} target;
//cookie is only meaningful when the target is a Binder entity; it carries some extra data that the entity itself interprets.
binder_uintptr_t cookie;
__u32 code;//the function (RPC) code of the call, e.g. ADD_SERVICE_TRANSACTION; the BC_/BR_ protocol commands travel separately, in front of this structure in the write/read buffer.
/* General information about the transaction. */
__u32 flags; //the transaction_flags listed above
pid_t sender_pid;
uid_t sender_euid;
binder_size_t data_size; //size of the data.buffer buffer
binder_size_t offsets_size; //size of the data.offsets buffer
/* If this transaction is inline, the data immediately
* follows here; otherwise, it ends with a pointer to
* the data buffer.
*/
union {
struct {
/* transaction data */
binder_uintptr_t buffer;//where the payload that is actually transferred is stored.
/* offsets from buffer to flat_binder_object structs */
binder_uintptr_t offsets;
} ptr;
__u8 buf[8];
} data;
};
The data member is where the command's real payload lives: it is stored in the data.buffer buffer, and the members before it merely describe that data. The contents of data.buffer fall into two categories: ordinary data, which the Binder driver does not care about, and Binder entities or Binder references, which the driver must get involved with. Why? Consider process A passing a Binder entity or reference to process B: the driver has to maintain the reference count of that entity or reference, otherwise A could destroy the entity while B is still using it and B would crash. So whenever the transferred data contains Binder entities or references, the driver must be told exactly where they sit, so that it can track them. That is the job of data.offsets: it records the offset, inside data.buffer, of every Binder entity or reference. Each of them is represented by a struct flat_binder_object; note the difference between this structure, which only describes the object while it is in transit, and binder_node.
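A small illustration of that layout (an assumed example for clarity, not driver code): a Parcel carrying two plain ints followed by one Binder object would look roughly like this:
/*
 *   data.buffer :  [ int32 ][ int32 ][ struct flat_binder_object ] ...
 *                   off=0    off=4    off=8
 *   data.offsets:  { 8 }                      one binder_size_t entry per binder object
 *   data_size    =  total number of bytes in data.buffer
 *   offsets_size =  1 * sizeof(binder_size_t)
 *
 * binder_transaction() walks data.offsets, casts data.buffer + offset to a
 * struct flat_binder_object *, and rewrites its type/handle for the receiving side.
 */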
3) flat_binder_object: every Binder entity or reference in transit is represented by a struct flat_binder_object.
struct flat_binder_object {
/* 8 bytes for large_flat_header. */
__u32 type;
__u32 flags;
/* 8 bytes of data. */
union {
binder_uintptr_t binder; //binder means this is a Binder entity
__u32 handle; //handle means this is a Binder reference.
};
/* extra data associated with local object */
binder_uintptr_t cookie;//cookie is only meaningful when this is a Binder entity; it carries extra data that the owning process interprets itself
};
8. Overall walkthrough of the ioctl() interaction between user space and the Binder driver.
The call first lands in binder_ioctl() in kernel/drivers/android/binder.c and then, based on the command, drops into the BINDER_WRITE_READ branch.
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
void __user *ubuf = (void __user *)arg;//address of the user-space data. Remember that a user-space ioctl call
//generally looks like ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr): the last argument is a pointer to
//the bwr structure, and once control reaches the driver that pointer becomes the arg parameter of binder_ioctl.
case BINDER_WRITE_READ: //the BINDER_WRITE_READ command enters this branch
ret = binder_ioctl_write_read(filp, cmd, arg, thread);
if (ret)
goto err;
break;
static int binder_ioctl_write_read(struct file *filp,
unsigned int cmd, unsigned long arg,
struct binder_thread *thread)
{
int ret = 0;
struct binder_proc *proc = filp->private_data;//the binder_proc of the process that issued this ioctl, stored in filp->private_data when it opened the device.
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;//pointer to the bwr structure in user space.
struct binder_write_read bwr;
//......
//convert the data passed in from user space into a kernel struct binder_write_read so the driver can work with it.
if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
//......
//bwr is now exactly what user space packaged and passed in; check which buffer carries data and which does not (sometimes both do),
//and enter the corresponding branch.
if (bwr.write_size > 0) {//executed when bwr.write_size is greater than 0
ret = binder_thread_write(proc, thread,
bwr.write_buffer,
bwr.write_size,
&bwr.write_consumed);
//......
}
if (bwr.read_size > 0) {//executed when bwr.read_size is greater than 0; note that write_size and read_size can both be non-zero in the same call.
ret = binder_thread_read(proc, thread, bwr.read_buffer,
bwr.read_size,
&bwr.read_consumed,
filp->f_flags & O_NONBLOCK);
//......
}
//......
if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
out:
return ret;
}
As you can see, the main job is simply to check the sizes of the data in the write and read buffers of the bwr structure passed in from user space, and to branch accordingly.
The two key functions are binder_thread_write() and binder_thread_read(); they can be thought of as the central hub that ultimately services every kind of user-space request.
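To make the parsing loop below easier to follow, this is roughly how the write buffer handed to binder_thread_write() is laid out (an illustration, not driver code; the read buffer follows the same cmd-then-payload pattern, only with BR_ codes):
/*
 *   write_buffer:  | u32 BC_TRANSACTION | binder_transaction_data | u32 BC_... | payload | ...
 *                    ^ ptr starts at write_buffer + write_consumed; several commands
 *                      may be packed back to back in a single ioctl
 *
 * binder_thread_write() reads a 32-bit cmd with get_user(), advances ptr, reads the
 * command-specific payload (e.g. copy_from_user() for BC_TRANSACTION), and loops
 * until ptr reaches write_buffer + write_size.
 */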
int binder_thread_write(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed)
{
uint32_t cmd;//cmd receives the command that user space wrote first (e.g. the cmd parameter of writeTransactionData(), which was written straight into mOut)
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;//this buffer holds the binder_transaction_data coming from user space
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
while (ptr < end && thread->return_error == BR_OK) {
if (get_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);//note that mOut may contain several commands, hence the surrounding loop.
trace_binder_command(cmd);
//......
switch (cmd) {//dispatch to the different branches.
case BC_INCREFS:
case BC_ACQUIRE:
case BC_RELEASE:
case BC_DECREFS: {
uint32_t target;
struct binder_ref *ref;
const char *debug_string;
if (get_user(target, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
if (target == 0 && binder_context_mgr_node &&
(cmd == BC_INCREFS || cmd == BC_ACQUIRE)) {
ref = binder_get_ref_for_node(proc,
binder_context_mgr_node);
//......
} else
ref = binder_get_ref(proc, target);
//......
switch (cmd) {
case BC_INCREFS:
debug_string = "IncRefs";
binder_inc_ref(ref, 0, NULL);
break;
case BC_ACQUIRE:
debug_string = "Acquire";
binder_inc_ref(ref, 1, NULL);
break;
case BC_RELEASE:
debug_string = "Release";
binder_dec_ref(&ref, 1);
break;
case BC_DECREFS:
default:
debug_string = "DecRefs";
binder_dec_ref(&ref, 0);
break;
}
//......
break;
}
case BC_INCREFS_DONE:
case BC_ACQUIRE_DONE: {
binder_uintptr_t node_ptr;
binder_uintptr_t cookie;
struct binder_node *node;
if (get_user(node_ptr, (binder_uintptr_t __user *)ptr))
return -EFAULT;
ptr += sizeof(binder_uintptr_t);
if (get_user(cookie, (binder_uintptr_t __user *)ptr))
return -EFAULT;
ptr += sizeof(binder_uintptr_t);
node = binder_get_node(proc, node_ptr);
if (node == NULL) {
//......
}
if (cookie != node->cookie) {
//......
}
if (cmd == BC_ACQUIRE_DONE) {
if (node->pending_strong_ref == 0) {
binder_user_error("%d:%d BC_ACQUIRE_DONE node %d has no pending acquire request\n",
proc->pid, thread->pid,
node->debug_id);
break;
}
node->pending_strong_ref = 0;
} else {
if (node->pending_weak_ref == 0) {
binder_user_error("%d:%d BC_INCREFS_DONE node %d has no pending increfs request\n",
proc->pid, thread->pid,
node->debug_id);
break;
}
node->pending_weak_ref = 0;
}
binder_dec_node(node, cmd == BC_ACQUIRE_DONE, 0);
binder_debug(BINDER_DEBUG_USER_REFS,
"%d:%d %s node %d ls %d lw %d\n",
proc->pid, thread->pid,
cmd == BC_INCREFS_DONE ? "BC_INCREFS_DONE" : "BC_ACQUIRE_DONE",
node->debug_id, node->local_strong_refs, node->local_weak_refs);
break;
}
case BC_ATTEMPT_ACQUIRE:
pr_err("BC_ATTEMPT_ACQUIRE not supported\n");
return -EINVAL;
case BC_ACQUIRE_RESULT:
pr_err("BC_ACQUIRE_RESULT not supported\n");
return -EINVAL;
case BC_FREE_BUFFER: {
binder_uintptr_t data_ptr;
struct binder_buffer *buffer;
if (get_user(data_ptr, (binder_uintptr_t __user *)ptr))
return -EFAULT;
ptr += sizeof(binder_uintptr_t);
buffer = binder_buffer_lookup(proc, data_ptr);
if (buffer == NULL) {
binder_user_error("%d:%d BC_FREE_BUFFER u%016llx no match\n",
proc->pid, thread->pid, (u64)data_ptr);
break;
}
if (!buffer->allow_user_free) {
binder_user_error("%d:%d BC_FREE_BUFFER u%016llx matched unreturned buffer\n",
proc->pid, thread->pid, (u64)data_ptr);
break;
}
binder_debug(BINDER_DEBUG_FREE_BUFFER,
"%d:%d BC_FREE_BUFFER u%016llx found buffer %d for %s transaction\n",
proc->pid, thread->pid, (u64)data_ptr,
buffer->debug_id,
buffer->transaction ? "active" : "finished");
if (buffer->transaction) {
buffer->transaction->buffer = NULL;
buffer->transaction = NULL;
}
if (buffer->async_transaction && buffer->target_node) {
BUG_ON(!buffer->target_node->has_async_transaction);
if (list_empty(&buffer->target_node->async_todo))
buffer->target_node->has_async_transaction = 0;
else
list_move_tail(buffer->target_node->async_todo.next, &thread->todo);
}
trace_binder_transaction_buffer_release(buffer);
binder_transaction_buffer_release(proc, buffer, NULL);
binder_free_buf(proc, buffer);
break;
}
case BC_TRANSACTION://Binder communication proper!!!
case BC_REPLY: {
struct binder_transaction_data tr;
//copy the binder_transaction_data over from user space and hand it to binder_transaction() for the actual transfer.
if (copy_from_user(&tr, ptr, sizeof(tr)))
return -EFAULT;
ptr += sizeof(tr);
binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
break;
}
case BC_REGISTER_LOOPER:
binder_debug(BINDER_DEBUG_THREADS,
"%d:%d BC_REGISTER_LOOPER\n",
proc->pid, thread->pid);
if (thread->looper & BINDER_LOOPER_STATE_ENTERED) {
thread->looper |= BINDER_LOOPER_STATE_INVALID;
binder_user_error("%d:%d ERROR: BC_REGISTER_LOOPER called after BC_ENTER_LOOPER\n",
proc->pid, thread->pid);
} else if (proc->requested_threads == 0) {
thread->looper |= BINDER_LOOPER_STATE_INVALID;
binder_user_error("%d:%d ERROR: BC_REGISTER_LOOPER called without request\n",
proc->pid, thread->pid);
} else {
proc->requested_threads--;
proc->requested_threads_started++;
}
thread->looper |= BINDER_LOOPER_STATE_REGISTERED;
break;
case BC_ENTER_LOOPER: //this is how, for example, Service Manager enters its loop.
binder_debug(BINDER_DEBUG_THREADS,
"%d:%d BC_ENTER_LOOPER\n",
proc->pid, thread->pid);
if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {
thread->looper |= BINDER_LOOPER_STATE_INVALID;
binder_user_error("%d:%d ERROR: BC_ENTER_LOOPER called after BC_REGISTER_LOOPER\n",
proc->pid, thread->pid);
}
//mark the current thread as having entered the loop (really just setting a flag bit in looper).
thread->looper |= BINDER_LOOPER_STATE_ENTERED;
break;
case BC_EXIT_LOOPER:
binder_debug(BINDER_DEBUG_THREADS,
"%d:%d BC_EXIT_LOOPER\n",
proc->pid, thread->pid);
thread->looper |= BINDER_LOOPER_STATE_EXITED;
break;
case BC_REQUEST_DEATH_NOTIFICATION:
case BC_CLEAR_DEATH_NOTIFICATION: {
uint32_t target;
binder_uintptr_t cookie;
struct binder_ref *ref;
struct binder_ref_death *death;
if (get_user(target, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
if (get_user(cookie, (binder_uintptr_t __user *)ptr))
return -EFAULT;
ptr += sizeof(binder_uintptr_t);
ref = binder_get_ref(proc, target);
if (ref == NULL) {
binder_user_error("%d:%d %s invalid ref %d\n",
proc->pid, thread->pid,
cmd == BC_REQUEST_DEATH_NOTIFICATION ?
"BC_REQUEST_DEATH_NOTIFICATION" :
"BC_CLEAR_DEATH_NOTIFICATION",
target);
break;
}
binder_debug(BINDER_DEBUG_DEATH_NOTIFICATION,
"%d:%d %s %016llx ref %d desc %d s %d w %d for node %d\n",
proc->pid, thread->pid,
cmd == BC_REQUEST_DEATH_NOTIFICATION ?
"BC_REQUEST_DEATH_NOTIFICATION" :
"BC_CLEAR_DEATH_NOTIFICATION",
(u64)cookie, ref->debug_id, ref->desc,
ref->strong, ref->weak, ref->node->debug_id);
if (cmd == BC_REQUEST_DEATH_NOTIFICATION) {
if (ref->death) {
binder_user_error("%d:%d BC_REQUEST_DEATH_NOTIFICATION death notification already set\n",
proc->pid, thread->pid);
break;
}
death = kzalloc(sizeof(*death), GFP_KERNEL);
if (death == NULL) {
thread->return_error = BR_ERROR;
binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
"%d:%d BC_REQUEST_DEATH_NOTIFICATION failed\n",
proc->pid, thread->pid);
break;
}
binder_stats_created(BINDER_STAT_DEATH);
INIT_LIST_HEAD(&death->work.entry);
death->cookie = cookie;
ref->death = death;
if (ref->node->proc == NULL) {
ref->death->work.type = BINDER_WORK_DEAD_BINDER;
if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) {
list_add_tail(&ref->death->work.entry, &thread->todo);
} else {
list_add_tail(&ref->death->work.entry, &proc->todo);
wake_up_interruptible(&proc->wait);
}
}
} else {
if (ref->death == NULL) {
binder_user_error("%d:%d BC_CLEAR_DEATH_NOTIFICATION death notification not active\n",
proc->pid, thread->pid);
break;
}
death = ref->death;
if (death->cookie != cookie) {
binder_user_error("%d:%d BC_CLEAR_DEATH_NOTIFICATION death notification cookie mismatch %016llx != %016llx\n",
proc->pid, thread->pid,
(u64)death->cookie,
(u64)cookie);
break;
}
ref->death = NULL;
if (list_empty(&death->work.entry)) {
death->work.type = BINDER_WORK_CLEAR_DEATH_NOTIFICATION;
if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) {
list_add_tail(&death->work.entry, &thread->todo);
} else {
list_add_tail(&death->work.entry, &proc->todo);
wake_up_interruptible(&proc->wait);
}
} else {
BUG_ON(death->work.type != BINDER_WORK_DEAD_BINDER);
death->work.type = BINDER_WORK_DEAD_BINDER_AND_CLEAR;
}
}
} break;
case BC_DEAD_BINDER_DONE: {
struct binder_work *w;
binder_uintptr_t cookie;
struct binder_ref_death *death = NULL;
if (get_user(cookie, (binder_uintptr_t __user *)ptr))
return -EFAULT;
ptr += sizeof(void *);
list_for_each_entry(w, &proc->delivered_death, entry) {
struct binder_ref_death *tmp_death = container_of(w, struct binder_ref_death, work);
if (tmp_death->cookie == cookie) {
death = tmp_death;
break;
}
}
binder_debug(BINDER_DEBUG_DEAD_BINDER,
"%d:%d BC_DEAD_BINDER_DONE %016llx found %p\n",
proc->pid, thread->pid, (u64)cookie,
death);
if (death == NULL) {
binder_user_error("%d:%d BC_DEAD_BINDER_DONE %016llx not found\n",
proc->pid, thread->pid, (u64)cookie);
break;
}
list_del_init(&death->work.entry);
if (death->work.type == BINDER_WORK_DEAD_BINDER_AND_CLEAR) {
death->work.type = BINDER_WORK_CLEAR_DEATH_NOTIFICATION;
if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) {
list_add_tail(&death->work.entry, &thread->todo);
} else {
list_add_tail(&death->work.entry, &proc->todo);
wake_up_interruptible(&proc->wait);
}
}
} break;
default:
pr_err("%d:%d unknown command %d\n",
proc->pid, thread->pid, cmd);
return -EINVAL;
}
*consumed = ptr - buffer;
}
return 0;
}
The function is quite long, since it has to cover all the different commands. The most important branch here is BC_TRANSACTION, which ordinary Binder communication falls into; we will come back to the function it calls. First, here is binder_thread_read().
If there is no work to do, binder_thread_read() either goes to sleep or returns; otherwise it takes a node off a todo queue, packs that node's data into a binder_transaction_data structure, and passes the structure to user space with copy_to_user().
binder_thread_read() waits for pending work by calling wait_event_interruptible() or wait_event_interruptible_exclusive() (the code quoted below uses the wait_event_freezable variants, which behave the same way for our purposes). wait_event_interruptible() is a macro similar to wait_event(), except that besides checking the wake-up condition it also checks whether the current process has pending signals. When the wake-up condition holds (for example binder_has_thread_work(thread) returns non-zero), or a signal is pending, the process has work to do and wait_event_interruptible() breaks out of its internal loop; if the exit condition really is not met, wait_event_interruptible() puts the caller to sleep.
// When the target process is woken up (i.e. the read buffer has data now), it continues in its own binder_thread_read() and tries to parse and execute the work it has just received.
// Whether that work came from the binder_proc's todo list or from some binder_thread's todo list,
// nodes are now taken off the todo list, and once the work is finished the binder_transaction node is deleted for good.
static int binder_thread_read(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed, int non_block)
{
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
int ret = 0;
int wait_for_proc_work;
//on the first pass (consumed == 0, e.g. in service manager's loop), write a BR_NOOP into the buffer ptr points at, i.e. the user-supplied bwr.read_buffer
if (*consumed == 0) {
if (put_user(BR_NOOP, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
}
retry:
//first check whether the thread's own todo list has work that needs doing.
//if the current thread has no transactions to process, wait_for_proc_work becomes true, meaning we should look at proc's pending work instead
wait_for_proc_work = thread->transaction_stack == NULL &&
list_empty(&thread->todo);
//at this point thread->return_error == BR_OK, as initialised earlier when the binder_thread was created
if (thread->return_error != BR_OK && ptr < end) {
if (thread->return_error2 != BR_OK) {
if (put_user(thread->return_error2, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
binder_stat_br(proc, thread, thread->return_error2);
if (ptr == end)
goto done;
thread->return_error2 = BR_OK;
}
if (put_user(thread->return_error, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
binder_stat_br(proc, thread, thread->return_error);
thread->return_error = BR_OK;
goto done;
}
thread->looper |= BINDER_LOOPER_STATE_WAITING;//mark the thread as waiting - again, this only sets a flag bit.
if (wait_for_proc_work)
proc->ready_threads++;
binder_unlock(__func__);
trace_binder_wait_for_work(wait_for_proc_work,
!!thread->transaction_stack,
!list_empty(&thread->todo));
if (wait_for_proc_work) {
//......
//set the current thread's priority: since the thread is about to handle work belonging to proc, align its priority with proc's default priority
binder_set_nice(proc->default_priority);
if (non_block) {
if (!binder_has_proc_work(proc, thread))//binder_has_proc_work(): does proc have pending work?
ret = -EAGAIN;
} else
//blocking mode: sleep here until a request arrives and wakes us up
ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
} else {
if (non_block) {
if (!binder_has_thread_work(thread))//binder_has_thread_work(): does the current thread itself have work to do?
ret = -EAGAIN;
} else
//blocking mode: sleep until a request arrives and wakes us up
ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
}
binder_lock(__func__);
if (wait_for_proc_work)
proc->ready_threads--;
thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
// in the non-blocking case, a non-zero ret means something went wrong, so return.
// in the blocking case, a non-zero ret means the wait itself ended abnormally, so also return.
if (ret)
return ret;
//at this point the woken thread (service manager, say) continues from here,
while (1) {
uint32_t cmd;
struct binder_transaction_data tr;
struct binder_work *w;
struct binder_transaction *t = NULL;
//when wait_for_proc_work is 1 and proc->todo is not empty, the first work item is taken from proc->todo:
// read the first node of the todo list of either the binder_thread or the binder_proc
if (!list_empty(&thread->todo)) {
w = list_first_entry(&thread->todo, struct binder_work,
entry);
} else if (!list_empty(&proc->todo) && wait_for_proc_work) {
w = list_first_entry(&proc->todo, struct binder_work,
entry);
} else {
/* no data added */
if (ptr - buffer == 4 &&
!(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN))
goto retry;
break;
}
if (end - ptr < sizeof(tr) + 4)
break;
switch (w->type) {
case BINDER_WORK_TRANSACTION: {//as described above, this work item's type is BINDER_WORK_TRANSACTION, so the statement below recovers the enclosing transaction.
t = container_of(w, struct binder_transaction, work);
} break;
case BINDER_WORK_TRANSACTION_COMPLETE: {
cmd = BR_TRANSACTION_COMPLETE;
if (put_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
binder_stat_br(proc, thread, cmd);
binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE,
"%d:%d BR_TRANSACTION_COMPLETE\n",
proc->pid, thread->pid);
// take the work node off the todo queue
list_del(&w->entry);
kfree(w);
binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
} break;
case BINDER_WORK_NODE: {
struct binder_node *node = container_of(w, struct binder_node, work);
uint32_t cmd = BR_NOOP;
const char *cmd_name;
int strong = node->internal_strong_refs || node->local_strong_refs;
int weak = !hlist_empty(&node->refs) || node->local_weak_refs || strong;
if (weak && !node->has_weak_ref) {
cmd = BR_INCREFS;
cmd_name = "BR_INCREFS";
node->has_weak_ref = 1;
node->pending_weak_ref = 1;
node->local_weak_refs++;
} else if (strong && !node->has_strong_ref) {
cmd = BR_ACQUIRE;
cmd_name = "BR_ACQUIRE";
node->has_strong_ref = 1;
node->pending_strong_ref = 1;
node->local_strong_refs++;
} else if (!strong && node->has_strong_ref) {
cmd = BR_RELEASE;
cmd_name = "BR_RELEASE";
node->has_strong_ref = 0;
} else if (!weak && node->has_weak_ref) {
cmd = BR_DECREFS;
cmd_name = "BR_DECREFS";
node->has_weak_ref = 0;
}
if (cmd != BR_NOOP) {
if (put_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
if (put_user(node->ptr,
(binder_uintptr_t __user *)ptr))
return -EFAULT;
ptr += sizeof(binder_uintptr_t);
if (put_user(node->cookie,
(binder_uintptr_t __user *)ptr))
return -EFAULT;
ptr += sizeof(binder_uintptr_t);
binder_stat_br(proc, thread, cmd);
binder_debug(BINDER_DEBUG_USER_REFS,
"%d:%d %s %d u%016llx c%016llx\n",
proc->pid, thread->pid, cmd_name,
node->debug_id,
(u64)node->ptr, (u64)node->cookie);
} else {
list_del_init(&w->entry);
if (!weak && !strong) {
binder_debug(BINDER_DEBUG_INTERNAL_REFS,
"%d:%d node %d u%016llx c%016llx deleted\n",
proc->pid, thread->pid,
node->debug_id,
(u64)node->ptr,
(u64)node->cookie);
rb_erase(&node->rb_node, &proc->nodes);
kfree(node);
binder_stats_deleted(BINDER_STAT_NODE);
} else {
binder_debug(BINDER_DEBUG_INTERNAL_REFS,
"%d:%d node %d u%016llx c%016llx state unchanged\n",
proc->pid, thread->pid,
node->debug_id,
(u64)node->ptr,
(u64)node->cookie);
}
}
} break;
case BINDER_WORK_DEAD_BINDER:
case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
struct binder_ref_death *death;
uint32_t cmd;
death = container_of(w, struct binder_ref_death, work);
if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION)
cmd = BR_CLEAR_DEATH_NOTIFICATION_DONE;
else
cmd = BR_DEAD_BINDER;
if (put_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
if (put_user(death->cookie,
(binder_uintptr_t __user *)ptr))
return -EFAULT;
ptr += sizeof(binder_uintptr_t);
binder_stat_br(proc, thread, cmd);
binder_debug(BINDER_DEBUG_DEATH_NOTIFICATION,
"%d:%d %s %016llx\n",
proc->pid, thread->pid,
cmd == BR_DEAD_BINDER ?
"BR_DEAD_BINDER" :
"BR_CLEAR_DEATH_NOTIFICATION_DONE",
(u64)death->cookie);
if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION) {
list_del(&w->entry);
kfree(death);
binder_stats_deleted(BINDER_STAT_DEATH);
} else
list_move(&w->entry, &proc->delivered_death);
if (cmd == BR_DEAD_BINDER)
goto done; /* DEAD_BINDER notifications can cause transactions */
} break;
}
if (!t)
continue;
BUG_ON(t->buffer == NULL);
// next, copy the data from transaction t into the local struct binder_transaction_data tr (on its way from kernel to user space):
if (t->buffer->target_node) {
struct binder_node *target_node = t->buffer->target_node;
tr.target.ptr = target_node->ptr;
// fill tr.cookie with the cookie recorded in the target binder_node;
// that value is the address of the target Binder entity in its own process!!!
tr.cookie = target_node->cookie;
t->saved_priority = task_nice(current);
if (t->priority < target_node->min_priority &&
!(t->flags & TF_ONE_WAY))
binder_set_nice(t->priority);
else if (!(t->flags & TF_ONE_WAY) ||
t->saved_priority > target_node->min_priority)
binder_set_nice(target_node->min_priority);
cmd = BR_TRANSACTION;
} else {
tr.target.ptr = 0;
tr.cookie = 0;
cmd = BR_REPLY;
}
tr.code = t->code;
tr.flags = t->flags;
tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);
if (t->from) {
struct task_struct *sender = t->from->proc->tsk;
tr.sender_pid = task_tgid_nr_ns(sender,
task_active_pid_ns(current));
} else {
tr.sender_pid = 0;
}
tr.data_size = t->buffer->data_size;
tr.offsets_size = t->buffer->offsets_size;
// binder_transaction_data's data fields only record addresses inside the binder buffer; the payload itself is not copied again
//t->buffer->data is a kernel-space address, but the data has to be handed back to the Service Manager process's
//user space, and user space cannot dereference kernel addresses, so something has to be done. Think of deep vs.
//shallow copies in object-oriented languages: a deep copy allocates new memory and moves the content over, while
//a shallow copy only hands out another reference to the same object. Binder uses something like a shallow copy:
//a user-space virtual address is arranged to map to the same physical pages as the kernel virtual address
//t->buffer->data, so no extra copy is needed.
//Concretely, adding the offset proc->user_buffer_offset to t->buffer->data yields the corresponding user-space
//address. After adjusting tr.data.ptr.buffer, do not forget to adjust tr.data.ptr.offsets in the same way.
tr.data.ptr.buffer = (binder_uintptr_t)(
(uintptr_t)t->buffer->data +
proc->user_buffer_offset);
tr.data.ptr.offsets = tr.data.ptr.buffer +
ALIGN(t->buffer->data_size, sizeof(void *));
// write the cmd and the contents of tr out to user space; at this point cmd should be BR_TRANSACTION
if (put_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
// the binder_transaction_data itself, of course, also has to be copied out to user space
if (copy_to_user(ptr, &tr, sizeof(tr)))
return -EFAULT;
ptr += sizeof(tr);
trace_binder_transaction_received(t);
binder_stat_br(proc, thread, cmd);
//......
//finally, since this transaction has been handled, remove it from the todo list
list_del(&t->work.entry);
t->buffer->allow_user_free = 1;
if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
t->to_parent = thread->transaction_stack;
t->to_thread = thread;
thread->transaction_stack = t;
} else {
t->buffer->transaction = NULL;
// TF_ONE_WAY case: the binder_transaction node is deleted right away
kfree(t);
binder_stats_deleted(BINDER_STAT_TRANSACTION);
}
break;//leave the while (1) loop.
}
done:
*consumed = ptr - buffer;
if (proc->requested_threads + proc->ready_threads == 0 &&
proc->requested_threads_started < proc->max_threads &&
(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
BINDER_LOOPER_STATE_ENTERED)) /* the user-space code fails to */
/*spawn a new thread if we leave this out */) {
proc->requested_threads++;
binder_debug(BINDER_DEBUG_THREADS,
"%d:%d BR_SPAWN_LOOPER\n",
proc->pid, thread->pid);
if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))
return -EFAULT;
binder_stat_br(proc, thread, BR_SPAWN_LOOPER);
}
return 0;
}
Now let us look at a very important branch of binder_thread_write(): BC_TRANSACTION. It calls binder_transaction(proc, thread, &tr, cmd == BC_REPLY) to process the binder_transaction_data further; this function can be read as one complete communication (one transaction).
static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_transaction_data *tr, int reply)
{
//the reply parameter passed in here depends on whether cmd is BC_REPLY; when it is, this call is a reply back to the original requester.
struct binder_transaction *t;
struct binder_work *tcomplete;
binder_size_t *offp, *off_end;
binder_size_t off_min;
struct binder_proc *target_proc;
struct binder_thread *target_thread = NULL;
struct binder_node *target_node = NULL;
struct list_head *target_list;
wait_queue_head_t *target_wait;
struct binder_transaction *in_reply_to = NULL;
struct binder_transaction_log_entry *e;
uint32_t return_error;
e = binder_transaction_log_add(&binder_transaction_log);
e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
e->from_proc = proc->pid;
e->from_thread = thread->pid;
e->target_handle = tr->target.handle;
e->data_size = tr->data_size;
e->offsets_size = tr->offsets_size;
if (reply) {//the reply path: an answer is being delivered back to the requesting side
in_reply_to = thread->transaction_stack;
//......
binder_set_nice(in_reply_to->saved_priority);
if (in_reply_to->to_thread != thread) {
//......
}
thread->transaction_stack = in_reply_to->to_parent;
target_thread = in_reply_to->from;
if (target_thread == NULL) {
return_error = BR_DEAD_REPLY;
goto err_dead_binder;
}
if (target_thread->transaction_stack != in_reply_to) {
//......
}
target_proc = target_thread->proc;
} else {
//not a reply, but e.g. a request coming from a client:
//first use the handle tr->target.handle (filled in when the data was packaged in user space) to find the matching binder_ref node, and through it the binder_node.
//this handle was originally the handle the BpBinder was created with.
if (tr->target.handle) {
struct binder_ref *ref;
//look the handle up in the reference tree to see whether a matching binder reference exists.
ref = binder_get_ref(proc, tr->target.handle);
if (ref == NULL) {
//......
}
target_node = ref->node;//from the reference found in the tree, get the target binder_node entity!!!
} else {// if the handle is 0 (the if above treats any non-zero value as true, as usual in C), use the special binder_context_mgr_node,
// i.e. the node belonging to Service Manager
target_node = binder_context_mgr_node;//binder_context_mgr_node is a static (global) variable in the driver.
if (target_node == NULL) {
return_error = BR_DEAD_REPLY;
goto err_no_context_mgr_node;
}
}
e->to_node = target_node->debug_id;
target_proc = target_node->proc;//the process that owns the target entity.
//......
if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
struct binder_transaction *tmp;
tmp = thread->transaction_stack;
if (tmp->to_thread != thread) {
//......
}
while (tmp) {
if (tmp->from && tmp->from->proc == target_proc)
target_thread = tmp->from;
tmp = tmp->from_parent;
}
}
}
// for a BC_TRANSACTION carrying TF_ONE_WAY (no reply expected), target_thread is NULL at this point,
//so the work item is queued on the binder_proc's todo list (not on any specific thread's todo list)
if (target_thread) {
e->to_thread = target_thread->pid;
target_list = &target_thread->todo;
target_wait = &target_thread->wait;
} else {
//when target_thread is NULL.
target_list = &target_proc->todo;
target_wait = &target_proc->wait;
}
e->to_proc = target_proc->pid;
/* TODO: reuse incoming transaction for reply */
//allocate a pending transaction t and a pending work item tcomplete, and initialise them.
//transaction t is the one that will be handed to target_proc for processing,
t = kzalloc(sizeof(*t), GFP_KERNEL);//struct binder_transaction: create the new binder_transaction node.
//......
binder_stats_created(BINDER_STAT_TRANSACTION);
tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
//......
binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);
//......
if (reply)
//......
else
//......
if (!reply && !(tr->flags & TF_ONE_WAY))
t->from = thread;
else
t->from = NULL;
t->sender_euid = task_euid(proc->tsk);
t->to_proc = target_proc;
t->to_thread = target_thread;
t->code = tr->code;
t->flags = tr->flags;
t->priority = task_nice(current);
trace_binder_transaction(reply, t, target_node);
//allocate a chunk of memory in target_proc's address space to hold the parameters passed in from user space.
t->buffer = binder_alloc_buf(target_proc, tr->data_size,//zy binder_transaction
tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
if (t->buffer == NULL) {
//......
}
t->buffer->allow_user_free = 0;
t->buffer->debug_id = t->debug_id;
t->buffer->transaction = t;
t->buffer->target_node = target_node;
trace_binder_transaction_alloc_buf(t->buffer);
if (target_node)
binder_inc_node(target_node, 1, 0, NULL);
// the code below examines every binder object in the transferred data; for binder entities, corresponding nodes may have to be added to the red-black trees.
// first, fetch the transferred data, and the offsets of the binder objects inside it, from user space
offp = (binder_size_t *)(t->buffer->data +
ALIGN(tr->data_size, sizeof(void *)));
if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
tr->data.ptr.buffer, tr->data_size)) {
//......
}
if (copy_from_user(offp, (const void __user *)(uintptr_t)
tr->data.ptr.offsets, tr->offsets_size)) {
//......
}
if (!IS_ALIGNED(tr->offsets_size, sizeof(binder_size_t))) {
//......
}
off_end = (void *)offp + tr->offsets_size;
off_min = 0;
// walk every flat_binder_object carried in the transfer, creating the necessary red-black tree nodes and adding them as needed.
for (; offp < off_end; offp++) {
struct flat_binder_object *fp;
if (*offp > t->buffer->data_size - sizeof(*fp) ||
*offp < off_min ||
t->buffer->data_size < sizeof(*fp) ||
!IS_ALIGNED(*offp, sizeof(u32))) {
//......
}
fp = (struct flat_binder_object *)(t->buffer->data + *offp);
off_min = *offp + sizeof(struct flat_binder_object);
switch (fp->type) {//the type was set on the user-space side, in writeStrongBinder().
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER: {
struct binder_ref *ref;
//since this is the first time this Binder entity is transferred through the driver,
//the binder_get_node() lookup for it returns NULL,
//so binder_new_node() creates a node in proc; later transfers can use it directly.
struct binder_node *node = binder_get_node(proc, fp->binder);
if (node == NULL) {
node = binder_new_node(proc, fp->binder, fp->cookie);
if (node == NULL) {
return_error = BR_FAILED_REPLY;
goto err_binder_new_node_failed;
}
node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
}
if (fp->cookie != node->cookie) {
//......
}
//Now, because this Binder entity is being handed over to target_proc to manage,
//binder_get_ref_for_node() creates a reference to the entity and adds it to the reference trees
//(creating the binder_ref red-black tree node in the target binder_proc if necessary),
//and binder_inc_ref() bumps the reference count so the reference cannot be destroyed while it is still in use.
ref = binder_get_ref_for_node(target_proc, node);
if (ref == NULL) {
return_error = BR_FAILED_REPLY;
goto err_binder_get_ref_for_node_failed;
}
if (fp->type == BINDER_TYPE_BINDER)
fp->type = BINDER_TYPE_HANDLE;
else
fp->type = BINDER_TYPE_WEAK_HANDLE;
// rewrite the flat_binder_object in the transferred data: what was a binder entity on the sending
// side becomes a binder proxy at the target, so record the binder handle instead.
fp->handle = ref->desc;
binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
&thread->todo);
trace_binder_transaction_node_to_ref(t, node, ref);
//......
} break;
case BINDER_TYPE_HANDLE: //when what is being passed is a reference.
case BINDER_TYPE_WEAK_HANDLE: {
struct binder_ref *ref = binder_get_ref(proc, fp->handle);
if (ref == NULL) {
//......
}
if (ref->node->proc == target_proc) {
if (fp->type == BINDER_TYPE_HANDLE)
fp->type = BINDER_TYPE_BINDER;
else
fp->type = BINDER_TYPE_WEAK_BINDER;
fp->binder = ref->node->ptr;
fp->cookie = ref->node->cookie;
binder_inc_node(ref->node, fp->type == BINDER_TYPE_BINDER, 0, NULL);
trace_binder_transaction_ref_to_node(t, ref);
//......
} else {
struct binder_ref *new_ref;
new_ref = binder_get_ref_for_node(target_proc, ref->node);
//......
fp->handle = new_ref->desc;
binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
trace_binder_transaction_ref_to_ref(t, ref,
new_ref);
//......
}
} break;
case BINDER_TYPE_FD: {
//......
} break;
default:
//......
}
}
if (reply) {
BUG_ON(t->buffer->async_transaction != 0);
binder_pop_transaction(target_thread, in_reply_to);
} else if (!(t->flags & TF_ONE_WAY)) {
BUG_ON(t->buffer->async_transaction != 0);
t->need_reply = 1;
t->from_parent = thread->transaction_stack;
thread->transaction_stack = t;
} else {
BUG_ON(target_node == NULL);
BUG_ON(t->buffer->async_transaction != 1);
if (target_node->has_async_transaction) {
target_list = &target_node->async_todo;
target_wait = NULL;
} else
target_node->has_async_transaction = 1;
}
t->work.type = BINDER_WORK_TRANSACTION;
// at last, insert the binder_transaction node into target_list (the target's todo queue).
list_add_tail(&t->work.entry, target_list);//append the pending transaction to target_list,
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
list_add_tail(&tcomplete->entry, &thread->todo);//and append the pending-completion work item to this thread's own todo list.
if (target_wait)
// the transfer is queued; now the other side can be woken up. wake up!
wake_up_interruptible(target_wait);//the target process has work to do now, so wake it.
return;
//......
}
9. binder_io: this structure is used inside service manager and is defined in frameworks/native/cmds/servicemanager/binder.h.
It is mainly used, after service manager has read the data handed up by the driver, to wrap the binder_transaction_data once more. (A sketch of how it is initialised follows the struct.)
struct binder_io
{
char *data; /* pointer to read/write from */ //the actual payload: binder_transaction_data->data.ptr.buffer
binder_size_t *offs; /* array of offsets */ //the object offsets: binder_transaction_data->data.ptr.offsets
size_t data_avail; /* bytes available in data buffer */
size_t offs_avail; /* entries available in offsets array */
char *data0; /* start of data buffer */
binder_size_t *offs0; /* start of offsets buffer */
uint32_t flags;
uint32_t unused;
};
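For example, when servicemanager's loop receives a BR_TRANSACTION it wraps the incoming binder_transaction_data roughly like this (bio_init_from_txn() from servicemanager's binder.c; the exact casts vary a little between versions):
void bio_init_from_txn(struct binder_io *bio, struct binder_transaction_data *txn)
{
    bio->data = bio->data0 = (char *)(intptr_t) txn->data.ptr.buffer;            /* the payload */
    bio->offs = bio->offs0 = (binder_size_t *)(intptr_t) txn->data.ptr.offsets;  /* the object offsets */
    bio->data_avail = txn->data_size;
    bio->offs_avail = txn->offsets_size / sizeof(size_t);
    bio->flags = BIO_F_SHARED;  /* the buffer is shared with the driver and later freed with BC_FREE_BUFFER */
}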
10. svcinfo: describes one service registered with service manager (a "service proxy" entry); the entries form a singly linked list.
struct svcinfo *svclist = NULL;//head of the list that records every "service proxy" added to the system
struct svcinfo //one service entry
{
struct svcinfo *next;//next links to the following entry.
uint32_t handle; //the binder handle of the corresponding system service!
struct binder_death death;
int allow_isolated;
size_t len;
uint16_t name[0];//the service name (UTF-16), stored inline at the end of the struct
};
Paired with it there is a lookup function, find_svc(). When an application calls getService() to obtain the proxy interface of a system service, service manager searches this "service table" for a node whose name matches the one the caller passed in; if one is found, the corresponding sp<IBinder> is returned, and on the remote side that interface corresponds to the target service's Binder entity. (A sketch of how find_svc() is used follows the code.)
struct svcinfo *find_svc(const uint16_t *s16, size_t len)//look a service up by name
{
struct svcinfo *si;
for (si = svclist; si; si = si->next) {
if ((len == si->len) &&
!memcmp(s16, si->name, len * sizeof(uint16_t))) {//memcmp compares the contents; a return value of 0 means they are equal.
return si;
}
}
return NULL;
}
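For context, a trimmed sketch of how find_svc() is used when service manager answers a getService()/checkService() request, modeled on do_find_service(); the real function's signature and its permission checks (allow_isolated, SELinux) vary between Android versions:
uint32_t do_find_service(const uint16_t *s, size_t len)
{
    struct svcinfo *si = find_svc(s, len);   /* walk svclist comparing names */

    if (!si || !si->handle)                  /* unknown service, or not registered yet */
        return 0;
    /* the real code also checks allow_isolated and the caller's permissions here */
    return si->handle;                       /* the binder handle saved when the service was added */
}
The caller then writes the handle back into the reply with bio_put_ref(), which the requesting process receives as a BINDER_TYPE_HANDLE flat_binder_object.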
11. struct binder_work: a work item that a process (or one of its threads) has to handle in the driver. (A short fragment showing how it is queued and recovered follows the struct.)
struct binder_work {
struct list_head entry;//links the work item into a list, e.g. a binder_thread's todo list.
enum {
BINDER_WORK_TRANSACTION = 1,
BINDER_WORK_TRANSACTION_COMPLETE,
BINDER_WORK_NODE,
BINDER_WORK_DEAD_BINDER,
BINDER_WORK_DEAD_BINDER_AND_CLEAR,
BINDER_WORK_CLEAR_DEATH_NOTIFICATION,
} type;//the type of work currently being handled, i.e. which stage the transaction is at.
};
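A binder_work is always embedded inside a larger object (a binder_transaction, a binder_node, or a binder_ref_death); the driver queues the embedded member and later recovers the container with container_of(). An illustrative fragment, condensed from the driver code quoted above:
/* enqueue: put the embedded work member on a todo list */
t->work.type = BINDER_WORK_TRANSACTION;
list_add_tail(&t->work.entry, &target_proc->todo);

/* dequeue: take the first work item and recover its container */
struct binder_work *w = list_first_entry(&proc->todo, struct binder_work, entry);
switch (w->type) {
case BINDER_WORK_TRANSACTION: {
	struct binder_transaction *t = container_of(w, struct binder_transaction, work);
	/* ... handle the transaction ... */
	break;
}
}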
12. struct binder_transaction: records the information of one communication between a client and a server, i.e. it represents one transaction.
struct binder_transaction {
int debug_id;
struct binder_work work;//what kind of work item is currently being processed
struct binder_thread *from; //which binder thread this transaction comes from
struct binder_transaction *from_parent;//which binder transaction it derives from (the parent on the sender's transaction stack)
struct binder_proc *to_proc;//which process it is going to
struct binder_thread *to_thread;//which thread of that process it is going to.
struct binder_transaction *to_parent;
unsigned need_reply:1;
/* unsigned is_dead:1; */ /* not used at the moment */
struct binder_buffer *buffer; //the data it carries.
unsigned int code;
unsigned int flags;
long priority;
long saved_priority;
kuid_t sender_euid;
};