request_mem_region, ioremap, and phys_to_virt()

This article covers how I/O memory resources are handled in the Linux kernel: the request_mem_region() macro requests (reserves) a specified I/O memory region, check_mem_region() checks whether a region is already in use, and release_mem_region() releases one. It also explains in detail how ioremap() maps an I/O address range into the kernel's virtual address space.

The Linux kernel defines three macros for operating on I/O memory resources in the header include/linux/ioport.h:
(1) request_mem_region(), which requests (reserves) the specified I/O memory region.
(2) check_mem_region(), which checks whether the specified I/O memory region is already in use.
(3) release_mem_region(), which releases the specified I/O memory region.
They are defined as follows:
#define request_mem_region(start,n,name)  __request_region(&iomem_resource, (start), (n), (name))
#define check_mem_region(start,n)         __check_region(&iomem_resource, (start), (n))
#define release_mem_region(start,n)       __release_region(&iomem_resource, (start), (n))
Here the start parameter is the starting physical address of the I/O memory region (a CPU physical address in the RAM physical address space), and n is the size of the region. Once the request has succeeded, the region can be mapped with ioremap(); a minimal sketch follows.
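A minimal sketch of the claim-then-map pattern, assuming a made-up device at physical address 0xd0000000 with a 4 KB register window (the name "my_dev" and both numbers are hypothetical):

#include <linux/ioport.h>
#include <linux/io.h>
#include <linux/errno.h>

#define MY_DEV_PHYS 0xd0000000UL /* hypothetical register base   */
#define MY_DEV_SIZE 0x1000UL     /* hypothetical window size     */

static void __iomem *my_dev_base;

static int my_dev_map(void)
{
        /* Claim the physical range so no other driver can grab it. */
        if (!request_mem_region(MY_DEV_PHYS, MY_DEV_SIZE, "my_dev"))
                return -EBUSY;

        /* Map the claimed range into kernel virtual address space. */
        my_dev_base = ioremap(MY_DEV_PHYS, MY_DEV_SIZE);
        if (!my_dev_base) {
                release_mem_region(MY_DEV_PHYS, MY_DEV_SIZE);
                return -ENOMEM;
        }
        return 0;
}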

  

void *__ioremap(unsigned long phys_addr, unsigned long size, unsigned long flags) maps an I/O address range into the kernel's virtual address space so that it can be accessed. Parameters: phys_addr is the starting I/O address to map, size is the size of the range, and flags carries permission-related attributes of the mapping.
Implementation: the requested range is checked first; low PCI/ISA addresses need no remapping, and mapping I/O space onto RAM that is already in use is not allowed. The function then allocates a vm_struct with get_vm_area() and calls remap_area_pages() to fill in the page tables, freeing the vm_struct if that fails. Once the page tables tie the new virtual addresses to the target physical addresses, the kernel can use the returned virtual address to access the mapped physical range.
For directly mapped I/O addresses, ioremap() does nothing (on MMU-less uClinux, for instance, it simply returns the physical address). With ioremap() (and iounmap()), a driver can access any I/O memory region, whether or not it is directly mapped into the virtual address space. The returned addresses must never be dereferenced directly the way a kmalloc() pointer can be; use accessor functions such as readb() instead.
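Continuing the hypothetical mapping sketched above, a read-modify-write of a made-up status register at offset 0x04 would go through the accessors, with iounmap()/release_mem_region() undoing the setup:

static void my_dev_poke(void)
{
        /* Access the mapped registers only through the I/O accessors. */
        u32 status = readl(my_dev_base + 0x04); /* hypothetical STATUS register */

        writel(status | 0x1, my_dev_base + 0x04);
}

static void my_dev_unmap(void)
{
        iounmap(my_dev_base);
        release_mem_region(MY_DEV_PHYS, MY_DEV_SIZE);
}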

/***********************************************************************/

There is another function that also yields a virtual address from a physical one: phys_to_virt(), which converts a physical address into a virtual address. The two are not interchangeable. Converting the same physical address with ioremap() and with phys_to_virt() makes the difference visible:

unsigned int volatile *addr;

addr = (unsigned int volatile *)ioremap(0x56000088, 12);
printk(KERN_ALERT "%p\n", addr);
addr = (unsigned int volatile *)phys_to_virt(0x56000088);
printk(KERN_ALERT "%p\n", addr);
The two calls return different values of addr. The reasons:
(1) In the kernel, phys_to_virt() merely applies a fixed offset to the address:
#ifndef __virt_to_phys
#define __virt_to_phys(x) ((x) - PAGE_OFFSET + PHYS_OFFSET)
#define __phys_to_virt(x) ((x) - PHYS_OFFSET + PAGE_OFFSET)
#endif
Note: here PHYS_OFFSET = 0x50000000. On a kernel with an MMU, PAGE_OFFSET is 0xc0000000; on a no-MMU kernel, PAGE_OFFSET equals PHYS_OFFSET.
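Plugging the example address from the snippet above into __phys_to_virt() with these values shows the fixed-offset arithmetic:

__phys_to_virt(0x56000088)
    = 0x56000088 - PHYS_OFFSET + PAGE_OFFSET
    = 0x56000088 - 0x50000000 + 0xc0000000
    = 0xc6000088

so the second printk() would show 0xc6000088, while ioremap() returns an address picked from the kernel's vmalloc region instead.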
(2) ioremap(), on the other hand, makes the kernel build fresh page-table entries for the given physical address; the physical-to-virtual relationship is established by those page tables. This is clear from the body of ioremap().

Original article: http://blog.youkuaiyun.com/xdicac/archive/2009/10/30/4743708.aspx

Original article: http://blog.youkuaiyun.com/unbutun/archive/2009/08/28/4488768.aspx
