The Android ION memory allocator

ION is a general-purpose memory management scheme that Google introduced to address the fragmented memory management interfaces found across Android devices. Beyond managing memory pools, it also lets clients share buffers. This article covers the basics of ION, how it differs from the DMA buffer sharing framework (DMABUF), and its use in multimedia applications.


http://lwn.net/Articles/480055/

 

Back in December 2011, LWN reviewed the list of Android kernel patches in the linux-next staging directory. The merging of these drivers, one of which is a memory allocator called PMEM, holds the promise that the mainline kernel release can one day boot an Android user space. Since then, it has become clear that PMEM is considered obsolete and will be replaced by the ION memory manager. ION is a generalized memory manager that Google introduced in the Android 4.0 ICS (Ice Cream Sandwich) release to address the issue of fragmented memory management interfaces across different Android devices. There are at least three, probably more, PMEM-like interfaces. On Android devices using NVIDIA Tegra, there is "NVMAP"; on Android devices using TI OMAP, there is "CMEM"; and on Android devices using Qualcomm MSM, there is "PMEM". All three SoC vendors are in the process of switching to ION.

This article takes a look at ION, summarizing its interfaces to user space and to kernel-space drivers. Besides being a memory pool manager, ION also enables its clients to share buffers, hence it treads the same ground as the DMA buffer sharing framework from Linaro (DMABUF). This article will end with a comparison of the two buffer sharing schemes.

ION heaps
Like its PMEM-like predecessors, ION manages one or more memory pools, some of which are set aside at boot time to combat fragmentation or to serve special hardware needs. GPUs, display controllers, and cameras are some of the hardware blocks that may have special memory requirements. ION presents its memory pools as ION heaps. Each type of Android device can be provisioned with a different set of ION heaps according to the memory requirements of the device. The provider of an ION heap must implement the following set of callbacks:
   struct ion_heap_ops {
	int (*allocate) (struct ion_heap *heap,
			 struct ion_buffer *buffer, unsigned long len,
			 unsigned long align, unsigned long flags);
	void (*free) (struct ion_buffer *buffer);
	int (*phys) (struct ion_heap *heap, struct ion_buffer *buffer,
		     ion_phys_addr_t *addr, size_t *len);
	struct scatterlist *(*map_dma) (struct ion_heap *heap,
			 struct ion_buffer *buffer);
	void (*unmap_dma) (struct ion_heap *heap,
			 struct ion_buffer *buffer);
	void * (*map_kernel) (struct ion_heap *heap,
			 struct ion_buffer *buffer);
	void (*unmap_kernel) (struct ion_heap *heap,
			 struct ion_buffer *buffer);
	int (*map_user) (struct ion_heap *heap, struct ion_buffer *buffer,
			 struct vm_area_struct *vma);
   };
Briefly, allocate() and free() obtain or release an ion_buffer object from the heap. A call to phys() will return the physical address and length of the buffer, but only for physically-contiguous buffers. If the heap does not provide physically contiguous buffers, it does not have to provide this callback. Here ion_phys_addr_t is a typedef of unsigned long, and will, someday, be replaced by phys_addr_t in include/linux/types.h. The map_dma() and unmap_dma() callbacks cause the buffer to be prepared (or unprepared) for DMA. The map_kernel() and unmap_kernel() callbacks map (or unmap) the physical memory into the kernel virtual address space. A call to map_user() will map the memory to user space. There is no unmap_user() because the mapping is represented as a file descriptor in user space. The closing of that file descriptor will cause the memory to be unmapped from the calling process.
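As a concrete illustration, here is a minimal sketch of how a heap provider might wire these callbacks into an ion_heap_ops table. The my_carveout_* names are hypothetical stubs, not the actual carveout heap implementation, and the header location is an assumption about the Android driver tree.

    #include "ion_priv.h"   /* assumed location of struct ion_heap_ops et al. */

    static int my_carveout_allocate(struct ion_heap *heap,
                                    struct ion_buffer *buffer,
                                    unsigned long len, unsigned long align,
                                    unsigned long flags)
    {
            /* carve 'len' bytes out of the region reserved at boot ... */
            return 0;
    }

    static void my_carveout_free(struct ion_buffer *buffer)
    {
            /* return the range to the carveout pool ... */
    }

    static int my_carveout_phys(struct ion_heap *heap, struct ion_buffer *buffer,
                                ion_phys_addr_t *addr, size_t *len)
    {
            /* carveout memory is physically contiguous, so this is cheap */
            return 0;
    }

    static struct ion_heap_ops my_carveout_ops = {
            .allocate = my_carveout_allocate,
            .free     = my_carveout_free,
            .phys     = my_carveout_phys,
            /* .map_dma, .map_kernel, .map_user, ... as required */
    };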

The default ION driver (which can be cloned from here) offers three heaps as listed below:

   ION_HEAP_TYPE_SYSTEM:        memory allocated via vmalloc_user().
   ION_HEAP_TYPE_SYSTEM_CONTIG: memory allocated via kzalloc.
   ION_HEAP_TYPE_CARVEOUT:      carveout memory is physically contiguous and set aside at boot.
Developers may choose to add more ION heaps. For example, this NVIDIA patch was submitted to add ION_HEAP_TYPE_IOMMU for hardware blocks equipped with an IOMMU.
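To make the provisioning step concrete, the following is a rough sketch (not the actual driver code) of how a device might register its heaps at boot using ion_device_create() and ion_device_add_heap() from the default driver; the heap objects and error handling are simplified assumptions.

    /*
     * Sketch only: my_carveout_heap and my_system_heap are assumed to be
     * struct ion_heap objects set up by the platform code, e.g. built
     * around an ops table like the one shown earlier.
     */
    extern struct ion_heap my_carveout_heap, my_system_heap;

    static struct ion_device *idev;

    static int __init my_board_ion_init(void)
    {
            idev = ion_device_create(NULL);     /* no custom ioctl handler */
            if (IS_ERR_OR_NULL(idev))
                    return -ENODEV;

            /* the order of these calls defines the allocation fallback order */
            ion_device_add_heap(idev, &my_carveout_heap);
            ion_device_add_heap(idev, &my_system_heap);
            return 0;
    }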
Using ION from user space
Typically, user space device access libraries will use ION to allocate large contiguous media buffers. For example, the still camera library may allocate a capture buffer to be used by the camera device. Once the buffer is fully populated with video data, the library can pass the buffer to the kernel to be processed by a JPEG encoder hardware block.

A user space C/C++ program must have been granted access to the /dev/ion device before it can allocate memory from ION. A call to open("/dev/ion", O_RDONLY) returns a file descriptor as a handle representing an ION client. Yes, one can allocate writable memory with an O_RDONLY open. There can be no more than one client per user process. To allocate a buffer, the client needs to fill in all the fields except the handle field in this data structure:

   struct ion_allocation_data {
        size_t len;
        size_t align;
        unsigned int flags;
        struct ion_handle *handle;
   }
The handle field is the output parameter, while the first three fields specify the length, alignment, and flags as input parameters. The flags field is a bit mask indicating one or more ION heaps to allocate from, with the fallback ordered according to which ION heap was first added via calls to ion_device_add_heap() during boot. In the default implementation, ION_HEAP_TYPE_CARVEOUT is added before ION_HEAP_TYPE_CONTIG. The flags value ION_HEAP_TYPE_CONTIG | ION_HEAP_TYPE_CARVEOUT indicates the intention to allocate from ION_HEAP_TYPE_CARVEOUT with fallback to ION_HEAP_TYPE_CONTIG.

User-space clients interact with ION using the ioctl() system call interface. To allocate a buffer, the client makes this call:

   int ioctl(int client_fd, ION_IOC_ALLOC, struct ion_allocation_data *allocation_data) 
This call returns a buffer represented by ion_handle, which is not a CPU-accessible buffer pointer. The handle can only be used to obtain a file descriptor for buffer sharing as follows:
   int ioctl(int client_fd, ION_IOC_SHARE, struct ion_fd_data *fd_data); 
Here client_fd is the file descriptor corresponding to /dev/ion, and fd_data is a data structure with an input handle field and an output fd field, as defined below:
   struct ion_fd_data {
        struct ion_handle *handle;
        int fd;
   }
The fd field is the file descriptor that can be passed around for sharing. On Android devices the BINDER IPC mechanism may be used to send fd to another process for sharing. To obtain the shared buffer, the second user process must obtain a client handle first via the open("/dev/ion", O_RDONLY) system call. ION tracks its user space clients by the PID of the process (specifically, the PID of the thread that is the "group leader" in the process). Repeating the open("/dev/ion", O_RDONLY) call in the same process will get back another file descriptor corresponding to the same client structure in the kernel.
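Putting the allocation and sharing calls together, here is a hedged sketch of the allocating process. It assumes a copy of the driver's ion.h header is available to user space, and it encodes heap selection the way the flags field is described above; both are assumptions that may differ between ION versions.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include "ion.h"   /* assumed: copy of the ION header exporting the ioctls */

    /* Allocate a buffer and turn its handle into a shareable file descriptor. */
    static int alloc_and_share(int client_fd, size_t len, struct ion_handle **handlep)
    {
            struct ion_allocation_data alloc_data = {
                    .len   = len,
                    .align = 4096,              /* page alignment */
                    /* prefer the carveout heap, fall back to the contiguous heap,
                     * following the article's description of the flags field */
                    .flags = ION_HEAP_TYPE_CARVEOUT | ION_HEAP_TYPE_SYSTEM_CONTIG,
            };
            struct ion_fd_data fd_data;

            if (ioctl(client_fd, ION_IOC_ALLOC, &alloc_data) < 0) {
                    perror("ION_IOC_ALLOC");
                    return -1;
            }

            fd_data.handle = alloc_data.handle;
            if (ioctl(client_fd, ION_IOC_SHARE, &fd_data) < 0) {
                    perror("ION_IOC_SHARE");
                    return -1;
            }

            *handlep = alloc_data.handle;       /* kept for the later ION_IOC_FREE */
            return fd_data.fd;                  /* pass this fd to other processes */
    }

    /* usage sketch: int client_fd = open("/dev/ion", O_RDONLY); ... */

The returned descriptor is what gets handed over Binder (or to a kernel driver), while the handle stays with the allocating client until it is freed.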

To free the buffer, the second client needs to undo the effect of mmap() with a call to munmap(), and the first client needs to close the file descriptor it obtained via ION_IOC_SHARE, and call ION_IOC_FREE as follows:

     int ioctl(int client_fd, ION_IOC_FREE, struct ion_handle_data *handle_data); 
Here ion_handle_data holds the handle as shown below:
     struct ion_handle_data {
	     struct ion_handle *handle;
     }
The ION_IOC_FREE command causes the handle's reference counter to be decremented by one. When this reference counter reaches zero, the ion_handle object gets destroyed and the affected ION bookkeeping data structure is updated.
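For completeness, a hedged sketch of the consumer and teardown side: the importing process maps and later unmaps the shared descriptor, while the allocating process closes its sharing descriptor and releases the handle. Names such as shared_fd and buf_len are placeholders; the descriptor and buffer length must be communicated between the processes by other means.

    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include "ion.h"   /* assumed: copy of the ION header exporting the ioctls */

    /* In the importing process: obtain a CPU mapping of the shared buffer.
     * The process calls munmap(addr, buf_len) when it is done with it. */
    static void *map_shared_buffer(int shared_fd, size_t buf_len)
    {
            void *addr = mmap(NULL, buf_len, PROT_READ | PROT_WRITE,
                              MAP_SHARED, shared_fd, 0);
            return addr == MAP_FAILED ? NULL : addr;
    }

    /* In the allocating process: drop the sharing fd and the handle's reference. */
    static void release_buffer(int client_fd, int shared_fd, struct ion_handle *handle)
    {
            struct ion_handle_data handle_data = { .handle = handle };

            close(shared_fd);
            ioctl(client_fd, ION_IOC_FREE, &handle_data);
    }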

User processes can also share ION buffers with a kernel driver, as explained in the next section.

Sharing ION buffers in the kernel

In the kernel, ION supports multiple clients, one for each driver that uses the ION functionality. A kernel driver calls the following function to obtain an ION client handle:

   struct ion_client *ion_client_create(struct ion_device *dev,
                    unsigned int heap_mask, const char *debug_name)

The first argument, dev, is the global ION device associated with /dev/ion; why a global device is needed, and why it must be passed as a parameter, is not entirely clear. The second argument, heap_mask, selects one or more ION heaps in the same way as the ion_allocation_data.flags field covered in the previous section. For smart phone use cases involving multimedia middleware, the user process typically allocates the buffer from ION, obtains a file descriptor using the ION_IOC_SHARE command, then passes the file descriptor to a kernel driver. The kernel driver calls ion_import_fd(), which converts the file descriptor to an ion_handle object, as shown below:

    struct ion_handle *ion_import_fd(struct ion_client *client, int fd_from_user); 
The ion_handle object is the driver's client-local reference to the shared buffer. The ion_import_fd() call looks up the physical address of the buffer to see whether the client has obtained a handle to the same buffer before, and if it has, this call simply increments the reference counter of the existing handle.

Some hardware blocks can only operate on physically-contiguous buffers with physical addresses, so affected drivers need to convert ion_handle to a physical buffer via this call:

   int ion_phys(struct ion_client *client, struct ion_handle *handle,
	        ion_phys_addr_t *addr, size_t *len)

Needless to say, if the buffer is not physically contiguous, this call will fail.
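Assembling the kernel-side calls described in this section into one hedged sketch: idev is assumed to be the global ION device created by the platform code, fd_from_user a descriptor produced by ION_IOC_SHARE and handed to the driver (for example through a driver-specific ioctl), and the heap_mask value simply mirrors the article's convention.

    #include <linux/err.h>
    #include "ion.h"   /* kernel-side ION header from the Android tree */

    static int my_driver_import_buffer(struct ion_device *idev, int fd_from_user)
    {
            struct ion_client *client;
            struct ion_handle *handle;
            ion_phys_addr_t phys;
            size_t len;
            int ret;

            client = ion_client_create(idev, ION_HEAP_TYPE_CARVEOUT, "my-driver");
            if (IS_ERR_OR_NULL(client))
                    return -ENOMEM;

            handle = ion_import_fd(client, fd_from_user);
            if (IS_ERR_OR_NULL(handle))
                    return -EINVAL;

            /* only succeeds when the buffer is physically contiguous */
            ret = ion_phys(client, handle, &phys, &len);
            if (ret)
                    return ret;

            /* program 'phys' and 'len' into the hardware block ... */
            return 0;
    }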

When handling calls from a client, ION always validates the input file descriptor, client and handle arguments. For example, when importing a file descriptor, ION ensures the file descriptor was indeed created by an ION_IOC_SHARE command. When ion_phys() is called, ION validates whether the buffer handle belongs to the list of handles the client is allowed to access, and returns error if the handle is not on the list. This validation mechanism reduces the likelihood of unwanted accesses and inadvertent resource leaks.

ION provides debug visibility through debugfs. It organizes debug information under /sys/kernel/debug/ion, with bookkeeping information stored in files associated with heaps and clients, identified by symbolic names or PIDs.

Comparing ION and DMABUF

ION and DMABUF share some common concepts. The dma_buf concept is similar to ion_buffer, while dma_buf_attachment serves a similar purpose as ion_handle. Both ION and DMABUF use anonymous file descriptors as the objects that can be passed around to provide reference-counted access to shared buffers. On the other hand, ION focuses on allocating and freeing memory from provisioned memory pools in a manner that can be shared and tracked, while DMABUF focuses more on buffer importing, exporting and synchronization in a manner that is consistent with buffer sharing solutions on non-ARM architectures.

The following table presents a feature comparison between ION and DMABUF:

Memory Manager Role
    ION:    ION replaces PMEM as the manager of provisioned memory pools. The list of ION heaps can be extended per device.
    DMABUF: DMABUF is a buffer sharing framework, designed to integrate with the memory allocators in DMA mapping frameworks, like the work-in-progress DMA-contiguous allocator, also known as the Contiguous Memory Allocator (CMA). DMABUF exporters have the option to implement custom allocators.

User Space Access Control
    ION:    ION offers the /dev/ion interface for user-space programs to allocate and share buffers. Any user program with ION access can cripple the system by depleting the ION heaps. Android checks user and group IDs to block unauthorized access to ION heaps.
    DMABUF: DMABUF offers only kernel APIs. Access control is a function of the permissions on the devices using the DMABUF feature.

Global Client and Buffer Database
    ION:    ION contains a device driver associated with /dev/ion. The device structure contains a database that tracks the allocated ION buffers, handles and file descriptors, all grouped by user clients and kernel clients. ION validates all client calls according to the rules of the database. For example, there is a rule that a client cannot have two handles to the same buffer.
    DMABUF: The DMA debug facility implements a global hashtable, dma_entry_hash, to track DMA buffers, but only when the kernel was built with the CONFIG_DMA_API_DEBUG option.

Cross-architecture Usage
    ION:    ION usage today is limited to architectures that run the Android kernel.
    DMABUF: DMABUF usage is cross-architecture. The DMA mapping redesign preparation patchset modified the DMA mapping code in 9 architectures besides the ARM architecture.

Buffer Synchronization
    ION:    ION considers buffer synchronization to be an orthogonal problem.
    DMABUF: DMABUF provides a pair of APIs for synchronization. The buffer-user calls dma_buf_map_attachment() whenever it wants to use the buffer for DMA. Once the DMA for the current buffer-user is over, it signals "end-of-DMA" to the exporter via a call to dma_buf_unmap_attachment() (a sketch follows the table).

Delayed Buffer Allocation
    ION:    ION allocates the physical memory before the buffer is shared.
    DMABUF: DMABUF can defer the allocation until the first call to dma_buf_map_attachment(). The exporter of a DMA buffer has the opportunity to scan all client attachments, collate their buffer constraints, then choose the appropriate backing storage.
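As a point of comparison with the synchronization and delayed-allocation entries above, here is a minimal sketch of the importer side of DMABUF as it stood around kernel 3.3; error handling is omitted and the surrounding driver context is assumed.

    #include <linux/dma-buf.h>
    #include <linux/dma-mapping.h>

    /* 'dev' is the importing device, 'fd' a descriptor received from the exporter. */
    static void dmabuf_import_and_dma(struct device *dev, int fd)
    {
            struct dma_buf *dmabuf = dma_buf_get(fd);
            struct dma_buf_attachment *attach = dma_buf_attach(dmabuf, dev);

            /* the exporter may defer the real allocation until this first map */
            struct sg_table *sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);

            /* ... perform DMA using the scatterlist in sgt ... */

            /* signal "end-of-DMA" back to the exporter */
            dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
            dma_buf_detach(dmabuf, attach);
            dma_buf_put(dmabuf);
    }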

ION and DMABUF can be separately integrated into multimedia applications written using the Video4Linux2 API. In the case of ION, these multimedia programs tend to use PMEM now on Android devices, so switching to ION from PMEM should have a relatively small impact.

Integrating DMABUF into Video4Linux2 is another story. It has taken ten patches to integrate the videobuf2 mechanism with DMABUF; in fairness, many of these revisions were the result of changes to DMABUF as that interface stabilized. The effort should pay dividends in the long run because the DMABUF-based sharing mechanism is designed with DMA mapping hooks for CMA and IOMMU. CMA and IOMMU hold the promise to reduce the amount of carveout memory that it takes to build an Android smart phone. In this email, Andrew Morton was urging the completion of the patch review process so that CMA can get through the 3.4 merge window.

Even though ION and DMABUF serve similar purposes, the two are not mutually exclusive. The Linaro Unified Memory Management team has started to integrate CMA into ION. To reach the state where a release of the mainline kernel can boot the Android user space, the /dev/ion interface to user space must obviously be preserved. In the kernel though, ION drivers may be able to use some of the DMABUF APIs to hook into CMA and IOMMU to take advantage of the capabilities offered by those subsystems. Conversely, DMABUF might be able to leverage ION to present a unified interface to user space, especially to the Android user space. DMABUF may also benefit from adopting some of the ION heap debugging features in order to become more developer friendly. Thus far, many signs indicate that Linaro, Google, and the kernel community are working together to bring the combined strength of ION and DMABUF to the mainline kernel.

Reposted from: https://www.cnblogs.com/nsnow/p/3863222.html
