The previous articles only gave a brief overview of binder's core pieces. This article walks through binder in detail, following the order in which things start up and get called.
I. Binder driver initialization
As a kernel module, the binder driver is loaded first and performs its initialization: it registers itself as a misc device, which creates the /dev/binder node in the filesystem, and it installs the operations for that device. Initialization happens in binder_init:
static int __init binder_init(void)
{
...
// register as a misc device
ret = misc_register(&binder_miscdev);
...
return ret;
}
static struct miscdevice binder_miscdev = {
.minor = MISC_DYNAMIC_MINOR, // minor device number, dynamically allocated
.name = "binder", // device name
.fops = &binder_fops // the device's file operations, a file_operations struct
};
static const struct file_operations binder_fops = {
.owner = THIS_MODULE,
.poll = binder_poll,
.unlocked_ioctl = binder_ioctl,
.compat_ioctl = binder_ioctl,
.mmap = binder_mmap,
.open = binder_open,
.flush = binder_flush,
.release = binder_release,
};
The table above registers a number of handlers. We will focus on the core ones, binder_open, binder_mmap, and binder_ioctl, all of which user space reaches through system calls.
II. Service Manager startup flow
/system/core/rootdir/init.rc
/framework/native/cmds/servicemanager/service_manager.c
/framework/native/cmds/servicemanager/binder.c
In init.rc, servicemanager is started as a service:
service servicemanager /system/bin/servicemanager    # start the service
class core
user system
group system
critical
onrestart restart healthd
onrestart restart zygote
onrestart restart media
onrestart restart surfaceflinger
onrestart restart drm
When the service starts, main in service_manager.c runs:
int main(int argc, char **argv)
{
struct binder_state *bs;
bs = binder_open(128*1024);// map 128KB
if (!bs) {
ALOGE("failed to open binder driver\n");
return -1;
}
if (binder_become_context_manager(bs)) {// ask to become the context manager
ALOGE("cannot become context manager (%s)\n", strerror(errno));
return -1;
}
selinux_enabled = is_selinux_enabled();
sehandle = selinux_android_service_context_handle();
selinux_status_open(true);
if (selinux_enabled > 0) {
if (sehandle == NULL) {
ALOGE("SELinux: Failed to acquire sehandle. Aborting.\n");
abort();
}
if (getcon(&service_manager_context) != 0) {
ALOGE("SELinux: Failed to acquire service_manager context. Aborting.\n");
abort();
}
}
union selinux_callback cb;
cb.func_audit = audit_callback;
selinux_set_callback(SELINUX_CB_AUDIT, cb);
cb.func_log = selinux_log_callback;
selinux_set_callback(SELINUX_CB_LOG, cb);
binder_loop(bs, svcmgr_handler);// loop forever: read data from the binder driver, parse and dispatch it
return 0;
}
main involves three core functions: binder_open, binder_become_context_manager, and binder_loop. Let's take them one at a time.
1.1 The binder_open function, in binder.c (Service Manager implements its own direct interaction with the binder driver)
struct binder_state *binder_open(size_t mapsize)
{
struct binder_state *bs;
struct binder_version vers;
bs = malloc(sizeof(*bs));
if (!bs) {
errno = ENOMEM;
return NULL;
}
bs->fd = open("/dev/binder", O_RDWR);// open the device; returns a file descriptor
if (bs->fd < 0) {
fprintf(stderr,"binder: cannot open device (%s)\n",
strerror(errno));
goto fail_open;
}
// check the protocol version
if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
(vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
fprintf(stderr,
"binder: kernel driver version (%d) differs from user space version (%d)\n",
vers.protocol_version, BINDER_CURRENT_PROTOCOL_VERSION);
goto fail_open;
}
bs->mapsize = mapsize;
bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);// map the shared buffer
if (bs->mapped == MAP_FAILED) {
fprintf(stderr,"binder: cannot map device (%s)\n",
strerror(errno));
goto fail_map;
}
return bs;
fail_map:
close(bs->fd);
fail_open:
free(bs);
return NULL;
}
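For reference, the binder_state handle being filled in above is defined in the same binder.c; it is nothing more than the device fd plus the mapping:
struct binder_state
{
    int fd;         // file descriptor for /dev/binder
    void *mapped;   // start of the mmap'ed buffer
    size_t mapsize; // size of that mapping (128KB for Service Manager)
};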
binder_open first opens the binder device with open, getting back a file descriptor, then queries the driver's protocol version with ioctl, and finally maps a buffer with mmap. All three are system calls that end up in the binder driver, in binder_open, binder_ioctl, and binder_mmap respectively. Let's look at these three driver entry points.
1.2 binder_open in the binder driver
binder_open's main job is to create a binder_proc object for each process that opens the device and add it to the global binder_procs hash list. It also initializes the process's todo list, wait queue, and so on.
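Before reading the function, here is a trimmed sketch of binder_proc, keeping only the fields this walkthrough touches (everything else omitted):
struct binder_proc {
    struct hlist_node proc_node;  // links this proc into the global binder_procs list
    struct rb_root threads;       // red-black tree of binder_threads, keyed by pid
    struct rb_root nodes;         // binder_nodes for services this process hosts
    int pid;
    struct task_struct *tsk;
    void *buffer;                 // kernel virtual address of the mmap'ed buffer
    ptrdiff_t user_buffer_offset; // user address minus kernel address
    struct list_head buffers;     // all binder_buffers, free and in use
    struct rb_root free_buffers;  // free buffers, keyed by size (best fit)
    size_t buffer_size;
    struct list_head todo;        // pending work for this process
    wait_queue_head_t wait;       // threads sleep here waiting for work
    long default_priority;
    /* ... refs trees, stats, death notification lists, etc. ... */
};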
static int binder_open(struct inode *nodp, struct file *filp)
{
struct binder_proc *proc; // the per-process binder record
proc = kzalloc(sizeof(*proc), GFP_KERNEL); // allocate kernel memory for the binder_proc
if (proc == NULL)
return -ENOMEM;
get_task_struct(current);
proc->tsk = current; //stash the current task in the proc
INIT_LIST_HEAD(&proc->todo); //initialize the todo list
init_waitqueue_head(&proc->wait); //initialize the wait queue
proc->default_priority = task_nice(current); //derive the default priority from the task's nice value
binder_lock(__func__); //take the global lock; binder is accessed from many threads
binder_stats_created(BINDER_STAT_PROC); //bump the binder_proc object counter
hlist_add_head(&proc->proc_node, &binder_procs); //add proc_node to the list headed by binder_procs
proc->pid = current->group_leader->pid;
INIT_LIST_HEAD(&proc->delivered_death);
filp->private_data = proc; //the file's private_data now points at the binder_proc
binder_unlock(__func__); //release the lock
return 0;
}
2.1 Requesting memory with mmap
User space asks the binder driver for a buffer via mmap, which lands in the driver's binder_mmap:
mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0)
2.2 binder_mmap in the binder driver
static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
{
int ret;
struct vm_struct *area;
struct binder_proc *proc = filp->private_data;//the binder_proc created in binder_open
const char *failure_string;
struct binder_buffer *buffer;
if (proc->tsk != current)
return -EINVAL;
if ((vma->vm_end - vma->vm_start) > SZ_4M)//cap the mapping at 4MB
vma->vm_end = vma->vm_start + SZ_4M;
...
vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE;
mutex_lock(&binder_mmap_lock);
if (proc->buffer) {
ret = -EBUSY;
failure_string = "already mapped";
goto err_already_mapped;
}
area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP);
if (area == NULL) {
ret = -ENOMEM;
failure_string = "get_vm_area";
goto err_get_vm_area_failed;
}
proc->buffer = area->addr;
//address offset = user-space virtual address - kernel virtual address
proc->user_buffer_offset = vma->vm_start - (uintptr_t)proc->buffer;
mutex_unlock(&binder_mmap_lock);
...
//array of physical-page pointers; one entry per 4KB page of the user mapping
proc->pages = kzalloc(sizeof(proc->pages[0]) * ((vma->vm_end - vma->vm_start) / PAGE_SIZE), GFP_KERNEL);
if (proc->pages == NULL) {
ret = -ENOMEM;
failure_string = "alloc page array";
goto err_alloc_pages_failed;
}
proc->buffer_size = vma->vm_end - vma->vm_start;
vma->vm_ops = &binder_vm_ops;
vma->vm_private_data = proc;
/* binder_update_page_range assumes preemption is disabled */
preempt_disable();
//allocate physical memory (initially a single page) and map it into both the kernel and the process address space (the secret behind the single copy)
ret = binder_update_page_range(proc, 1, proc->buffer, proc->buffer + PAGE_SIZE, vma);
preempt_enable_no_resched();
if (ret) {
ret = -ENOMEM;
failure_string = "alloc small buf";
goto err_alloc_small_buf_failed;
}
buffer = proc->buffer;//the first binder_buffer starts at proc->buffer
INIT_LIST_HEAD(&proc->buffers);//initialize the head of the buffers list
list_add(&buffer->entry, &proc->buffers);//add the buffer to the buffers list
buffer->free = 1;
binder_insert_free_buffer(proc, buffer);//insert the free buffer into the proc->free_buffers tree
proc->free_async_space = proc->buffer_size / 2;//async transactions may use at most half the space
barrier();
proc->files = get_files_struct(current);
proc->vma = vma;
proc->vma_vm_mm = vma->vm_mm;
/*pr_info("binder_mmap: %d %lx-%lx maps %pK\n",
proc->pid, vma->vm_start, vma->vm_end, proc->buffer);*/
return 0;
...
return ret;
}
The code above reserves a kernel virtual address range the same size as the user mapping (capped at 4MB; ordinary processes default to 1MB minus 8KB, and Service Manager uses 128KB), then allocates physical memory, initially a single page, and maps it into both the user virtual address space and the kernel virtual address space. The fixed offset between the two views is saved in binder_proc->user_buffer_offset, which is what keeps them in sync.

The code also sets up a list of all buffers and a free-buffer structure. The initial buffer is not yet in use, so it is inserted into the free-buffer tree as well. The driver allocates memory best-fit: a transaction takes the smallest free buffer that can hold its data and returns it to the free tree when done; if no free buffer is large enough, additional physical memory is allocated.

For copying between kernel and user space, the driver reuses Linux's copy_from_user and copy_to_user, which check that physical memory is present and that the virtual range lies inside the process's address space. A transaction's payload is copied exactly once, from the sender into the kernel virtual buffer; because that kernel buffer shares physical pages with the receiver's user-space mapping, the receiver reads it without a second copy. When the receiver replies, the roles simply swap: the receiver becomes the sender and the sender becomes the receiver.
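A fragment of binder_thread_read (covered in detail later) shows user_buffer_offset at work: the driver hands the receiver a pointer into the shared pages simply by adding the offset to the kernel address, without copying the payload out:
//t->buffer->data is a kernel virtual address; adding user_buffer_offset
//yields the same physical pages at the receiver's user-space address
tr.data.ptr.buffer = (binder_uintptr_t)(
        (uintptr_t)t->buffer->data +
        proc->user_buffer_offset);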
3.1 The version check and binder_loop
The version check fetches the version from the binder driver via ioctl(bs->fd, BINDER_VERSION, &vers), sending the BINDER_VERSION command.
binder_become_context_manager asks to become the context manager by sending the BINDER_SET_CONTEXT_MGR command:
int binder_become_context_manager(struct binder_state *bs)
{
return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
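These commands are ordinary ioctl numbers defined in the driver's UAPI header (include/uapi/linux/android/binder.h); the ones this article touches are:
#define BINDER_WRITE_READ       _IOWR('b', 1, struct binder_write_read)
#define BINDER_SET_MAX_THREADS  _IOW('b', 5, __u32)
#define BINDER_SET_CONTEXT_MGR  _IOW('b', 7, __s32)
#define BINDER_THREAD_EXIT      _IOW('b', 8, __s32)
#define BINDER_VERSION          _IOWR('b', 9, struct binder_version)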
Now let's look at binder_loop:
void binder_loop(struct binder_state *bs, binder_handler func)
{
int res;
struct binder_write_read bwr;
uint32_t readbuf[32];
bwr.write_size = 0;
bwr.write_consumed = 0;
bwr.write_buffer = 0;
readbuf[0] = BC_ENTER_LOOPER;
binder_write(bs, readbuf, sizeof(uint32_t));
for (;;) {
bwr.read_size = sizeof(readbuf);
bwr.read_consumed = 0;
bwr.read_buffer = (uintptr_t) readbuf;
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
if (res < 0) {
ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
break;
}
res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
if (res == 0) {
ALOGE("binder_loop: unexpected reply?!\n");
break;
}
if (res < 0) {
ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
break;
}
}
}
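The binder_write call used above to send BC_ENTER_LOOPER is a small write-only wrapper around the same BINDER_WRITE_READ ioctl, with read_size left at 0 (also from servicemanager's binder.c):
int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;
    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;      // write only: the driver will not block to read
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}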
binder_loop, then, runs an infinite loop that keeps fetching data from the binder driver with the BINDER_WRITE_READ command. The actual handling of the requests is done by svcmgr_handler in service_manager.c, which we will cover later; for now, let's look at the binder_ioctl function in the binder driver.
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int ret;
struct binder_proc *proc = filp->private_data;
struct binder_thread *thread;
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;
/*pr_info("binder_ioctl: %d:%d %x %lx\n",
proc->pid, current->pid, cmd, arg);*/
trace_binder_ioctl(cmd, arg);
//may sleep here; only blocks while binder_stop_on_user_error >= 2
ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
if (ret)
goto err_unlocked;
binder_lock(__func__);
//look up, or create, the calling thread's binder_thread
thread = binder_get_thread(proc);
if (thread == NULL) {
ret = -ENOMEM;
goto err;
}
switch (cmd) {
case BINDER_WRITE_READ://binder read/write operations
ret = binder_ioctl_write_read(filp, cmd, arg, thread);
if (ret)
goto err;
break;
case BINDER_SET_MAX_THREADS://set the max number of binder threads
if (copy_from_user_preempt_disabled(&proc->max_threads, ubuf, sizeof(proc->max_threads))) {
ret = -EINVAL;
goto err;
}
break;
case BINDER_SET_CONTEXT_MGR://become the context manager
ret = binder_ioctl_set_ctx_mgr(filp);
if (ret)
goto err;
break;
case BINDER_THREAD_EXIT://a binder thread is exiting; free its binder_thread
binder_debug(BINDER_DEBUG_THREADS, "%d:%d exit\n",
proc->pid, thread->pid);
binder_free_thread(proc, thread);
thread = NULL;
break;
case BINDER_VERSION: {//query the binder version
struct binder_version __user *ver = ubuf;
if (size != sizeof(struct binder_version)) {
ret = -EINVAL;
goto err;
}
if (put_user_preempt_disabled(BINDER_CURRENT_PROTOCOL_VERSION, &ver->protocol_version)) {
ret = -EINVAL;
goto err;
}
break;
}
default:
ret = -EINVAL;
goto err;
}
ret = 0;
err:
if (thread)
thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
binder_unlock(__func__);
wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
if (ret && ret != -ERESTARTSYS)
pr_info("%d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
trace_binder_ioctl_done(ret);
return ret;
}
The function first looks up the calling thread's binder_thread; if none exists yet, one is created and its todo list initialized:
static struct binder_thread *binder_get_thread(struct binder_proc *proc)
{
struct binder_thread *thread = NULL;
struct rb_node *parent = NULL;
struct rb_node **p = &proc->threads.rb_node;//the process's red-black tree of binder threads
while (*p) {//walk the tree looking for the current thread
parent = *p;
thread = rb_entry(parent, struct binder_thread, rb_node);
if (current->pid < thread->pid)
p = &(*p)->rb_left;
else if (current->pid > thread->pid)
p = &(*p)->rb_right;
else
break;
}
if (*p == NULL) {//not found: create a new binder_thread
thread = kzalloc_preempt_disabled(sizeof(*thread));
if (thread == NULL)
return NULL;
binder_stats_created(BINDER_STAT_THREAD);
thread->proc = proc;
thread->pid = current->pid;
init_waitqueue_head(&thread->wait);
INIT_LIST_HEAD(&thread->todo);//initialize the binder thread's todo list
rb_link_node(&thread->rb_node, parent, p);//link it into the binder thread tree
rb_insert_color(&thread->rb_node, &proc->threads);
thread->looper |= BINDER_LOOPER_STATE_NEED_RETURN;
thread->return_error = BR_OK;
thread->return_error2 = BR_OK;
}
return thread;
}
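For reference, a trimmed view of the binder_thread structure created here:
struct binder_thread {
    struct binder_proc *proc; // the process this thread belongs to
    struct rb_node rb_node;   // links into proc->threads, keyed by pid
    int pid;
    int looper;               // BINDER_LOOPER_STATE_* flags
    struct binder_transaction *transaction_stack; // chain of in-flight transactions
    struct list_head todo;    // work queued for this specific thread
    uint32_t return_error;    // errors to report back to user space
    uint32_t return_error2;
    wait_queue_head_t wait;   // the thread sleeps here in binder_thread_read
    struct binder_stats stats;
};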
Next comes the dispatch on the command. The flow so far has used three commands: BINDER_VERSION is trivial, simply returning the version; BINDER_SET_CONTEXT_MGR requests context-manager status; and BINDER_WRITE_READ is the data read/write command.
static int binder_ioctl_set_ctx_mgr(struct file *filp)
{
int ret = 0;
struct binder_proc *proc = filp->private_data;
kuid_t curr_euid = current_euid();
if (binder_context_mgr_node != NULL) {
pr_err("BINDER_SET_CONTEXT_MGR already set\n");
ret = -EBUSY;
goto out;
}
ret = security_binder_set_context_mgr(proc->tsk);
if (ret < 0)
goto out;
if (uid_valid(binder_context_mgr_uid)) {
if (!uid_eq(binder_context_mgr_uid, curr_euid)) {
pr_err("BINDER_SET_CONTEXT_MGR bad uid %d != %d\n",
from_kuid(&init_user_ns, curr_euid),
from_kuid(&init_user_ns,
binder_context_mgr_uid));
ret = -EPERM;
goto out;
}
} else {
binder_context_mgr_uid = curr_euid;
}
//create the global binder_node for Service Manager
binder_context_mgr_node = binder_new_node(proc, 0, 0);//the trailing zeros are ptr and cookie
if (binder_context_mgr_node == NULL) {
ret = -ENOMEM;
goto out;
}
binder_context_mgr_node->local_weak_refs++;
binder_context_mgr_node->local_strong_refs++;
binder_context_mgr_node->has_strong_ref = 1;
binder_context_mgr_node->has_weak_ref = 1;
out:
return ret;
}
This creates binder_context_mgr_node, the global binder_node for Service Manager, with both ptr and cookie set to 0, and inserts the node into the process's nodes red-black tree.
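A trimmed view of binder_node, the driver-side representation of a service:
struct binder_node {
    int debug_id;
    struct binder_work work;
    union {
        struct rb_node rb_node;   // while alive: linked into proc->nodes
        struct hlist_node dead_node;
    };
    struct binder_proc *proc;     // the process hosting the service
    struct hlist_head refs;       // binder_refs held by client processes
    int internal_strong_refs;
    int local_weak_refs;
    int local_strong_refs;
    binder_uintptr_t ptr;         // user-space object address (0 for Service Manager)
    binder_uintptr_t cookie;      // user-space extra data (0 for Service Manager)
    /* ... ref-state flags, async todo list, etc. ... */
};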
The most heavily used command is BINDER_WRITE_READ, which calls binder_ioctl_write_read.
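Its argument is a struct binder_write_read describing both directions of the transfer (from the UAPI header):
struct binder_write_read {
    binder_size_t       write_size;     /* bytes to write */
    binder_size_t       write_consumed; /* bytes consumed by driver */
    binder_uintptr_t    write_buffer;
    binder_size_t       read_size;      /* bytes to read */
    binder_size_t       read_consumed;  /* bytes consumed by driver */
    binder_uintptr_t    read_buffer;
};
binder_ioctl_write_read copies this descriptor in, runs the write half and then the read half, and copies it back out: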
static int binder_ioctl_write_read(struct file *filp,
unsigned int cmd, unsigned long arg,
struct binder_thread *thread)
{
int ret = 0;
struct binder_proc *proc = filp->private_data;
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;
struct binder_write_read bwr;
if (size != sizeof(struct binder_write_read)) {
ret = -EINVAL;
goto out;
}
//copy the binder_write_read descriptor from user space into the kernel struct bwr
if (copy_from_user_preempt_disabled(&bwr, ubuf, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
...
//the write buffer contains data
if (bwr.write_size > 0) {
ret = binder_thread_write(proc, thread,
bwr.write_buffer,
bwr.write_size,
&bwr.write_consumed);
trace_binder_write_done(ret);
if (ret < 0) {
bwr.read_consumed = 0;
if (copy_to_user_preempt_disabled(ubuf, &bwr, sizeof(bwr)))
ret = -EFAULT;
goto out;
}
}
//the caller supplied a read buffer
if (bwr.read_size > 0) {
ret = binder_thread_read(proc, thread, bwr.read_buffer,
bwr.read_size,
&bwr.read_consumed,
filp->f_flags & O_NONBLOCK);
trace_binder_read_done(ret);
if (!list_empty(&proc->todo))
wake_up_interruptible(&proc->wait);
if (ret < 0) {
if (copy_to_user_preempt_disabled(ubuf, &bwr, sizeof(bwr)))
ret = -EFAULT;
goto out;
}
}
...
//copy the updated binder_write_read descriptor from kernel space back to user space
if (copy_to_user_preempt_disabled(ubuf, &bwr, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
out:
return ret;
}
The core of the read/write path is binder_thread_write and binder_thread_read, which handle the write buffer and the read buffer respectively. We will cover them in detail later, when walking through data transfer between servers, clients, and Service Manager.
This covered the work Service Manager does at startup. Next, we will use the registration of AMS and the Activity launch flow as examples.