Driver porting: low-level memory allocation

This article is part of the LWN Porting Drivers to 2.6 series.
The 2.5 development series has brought relatively few changes to the way device drivers will allocate and manage memory. In fact, most drivers should work with no changes in this regard. A few improvements have been made, however, that are worth a mention. These include some changes to page allocation, and the new "mempool" interface. Note that the allocation and management of per-CPU data is described in a separate article.

Allocation flags

The old <linux/malloc.h> include file is gone; it is now necessary to include <linux/slab.h> instead.

The GFP_BUFFER allocation flag is gone (it was actually removed in 2.4.6). That will bother few people, since almost nobody used it. There are two new flags which have replaced it: GFP_NOIO and GFP_NOFS. The GFP_NOIO flag allows sleeping, but no I/O operations will be started to help satisfy the request. GFP_NOFS is a bit less restrictive; some I/O operations can be started (writing to a swap area, for example), but no filesystem operations will be performed.
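As a quick illustration of how GFP_NOIO might be used (the structure and surrounding context here are invented, not taken from a real driver), code running on the block I/O path could allocate this way:

    #include <linux/slab.h>

    /* Hypothetical: allocate a small per-request structure while on the
       block I/O path. GFP_NOIO may sleep, but the allocator will not start
       new I/O to satisfy the request, avoiding recursion into the I/O code. */
    struct my_request_ctx *ctx = kmalloc(sizeof(*ctx), GFP_NOIO);
    if (ctx == NULL)
            return -ENOMEM;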

For reference, here is the full set of allocation flags, from the most restrictive to the least:

  • GFP_ATOMIC: a high-priority allocation which will not sleep; this is the flag to use in interrupt handlers and other non-blocking situations.

  • GFP_NOIO: blocking is possible, but no I/O will be performed.

  • GFP_NOFS: no filesystem operations will be performed.

  • GFP_KERNEL: a regular, blocking allocation.

  • GFP_USER: a blocking allocation for user-space pages.

  • GFP_HIGHUSER: for allocating user-space pages where high memory may be used.

The __GFP_DMA and __GFP_HIGHMEM flags still exist and may be added to the above to direct an allocation to a particular memory zone. In addition, 2.5.69 added some new modifiers:

  • __GFP_REPEAT: This flag tells the page allocator to "try harder," repeating failed allocation attempts if need be. Allocations can still fail, but failure should be less likely.

  • __GFP_NOFAIL: Try even harder; allocations with this flag must not fail. Needless to say, such an allocation could take a long time to satisfy.

  • __GFP_NORETRY: Failed allocations should not be retried; instead, a failure status will be returned to the caller immediately.

The __GFP_NOFAIL flag is sure to be tempting to programmers who would rather not code failure paths, but that temptation should be resisted most of the time. Only allocations which truly cannot be allowed to fail should use this flag.
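These modifiers are ORed into the flags described above. As a sketch (the buffer names and sizes are made up for illustration), a driver that can live with a smaller buffer might do something like:

    /* Sketch: ask for a large buffer, but fail immediately rather than
       retrying; on failure, settle for a smaller allocation instead. */
    buffer = kmalloc(LARGE_BUFFER_SIZE, GFP_KERNEL | __GFP_NORETRY);
    if (buffer == NULL)
            buffer = kmalloc(SMALL_BUFFER_SIZE, GFP_KERNEL);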

Page-level allocation

For page-level allocations, the alloc_pages() and get_free_page() functions (and variants) exist as always. They are now defined in <linux/gfp.h>, however, and there are a few new ones as well. On NUMA systems, the allocator will do its best to allocate pages on the same node as the caller. To explicitly allocate pages on a different NUMA node, use:

    struct page *alloc_pages_node(int node_id,
                                  unsigned int gfp_mask,
                                  unsigned int order);
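
As a brief sketch of how this call might be used (target_node and the order-2 size are assumptions made for the example):

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Sketch: allocate four contiguous pages (order 2) on a chosen NUMA
       node, then release them with the usual page-level interface. */
    struct page *pages = alloc_pages_node(target_node, GFP_KERNEL, 2);
    if (pages == NULL)
            return -ENOMEM;
    /* ... use the pages ... */
    __free_pages(pages, 2);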

The memory allocator now distinguishes between "hot" and "cold" pages. A hot page is one that is likely to be represented in the processor's cache; cold pages, instead, must be fetched from RAM. In general, it is preferable to use hot pages whenever possible, since they are already cached. Even if the page is to be overwritten immediately (usually the case with memory allocations, after all), hot pages are better - overwriting them will not push some other, perhaps useful, data from the cache. So alloc_pages() and friends will return hot pages when they are available.

On occasion, however, a cold page is preferable. In particular, pages which will be overwritten via a DMA read from a device might as well be cold, since their cache data will be invalidated anyway. In this sort of situation, the __GFP_COLD flag should be passed into the allocation.
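For example, a driver allocating a page that a device will immediately fill via DMA might request a cold page; this is a sketch, not code from any particular driver:

    /* Sketch: this page will be overwritten by a DMA read from the device,
       so any cached contents would be invalidated anyway; ask for a cold page. */
    struct page *page = alloc_page(GFP_KERNEL | __GFP_COLD);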

Of course, this whole scheme depends on the memory allocator knowing which pages are likely to be hot. Normally, order-zero allocations (i.e. single pages) are assumed to be hot. If you know the state of a page you are freeing, you can tell the allocator explicitly with one of the following:

    void free_hot_page(struct page *page);
    void free_cold_page(struct page *page);

These functions only work with order-zero allocations; the hot/cold status of larger blocks is not tracked.
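Continuing the DMA sketch above: if the driver knows that the CPU never touched the page before it is released, it can pass that information along:

    /* Sketch: the page was only written by the device, never by the CPU,
       so it is almost certainly not present in the processor's cache. */
    free_cold_page(page);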

Memory pools

Memory pools were one of the very first changes in the 2.5 series - they were added to 2.5.1 to support the new block I/O layer. The purpose of mempools is to help out in situations where a memory allocation must succeed, but sleeping is not an option. To that end, mempools pre-allocate a pool of memory and reserve it until it is needed. Mempools make life easier in some situations, but they should be used with restraint; each mempool takes a chunk of kernel memory out of circulation and raises the minimum amount of memory the kernel needs to run effectively.

To work with mempools, your code should include <linux/mempool.h>. A mempool is created with mempool_create():

    mempool_t *mempool_create(int min_nr,
                              mempool_alloc_t *alloc_fn,
                              mempool_free_t *free_fn,
                              void *pool_data);

Here, min_nr is the minimum number of pre-allocated objects that the mempool tries to keep around. The mempool defers the actual allocation and deallocation of objects to user-supplied routines, which have the following prototypes:

    typedef void *(mempool_alloc_t)(int gfp_mask, void *pool_data);
    typedef void (mempool_free_t)(void *element, void *pool_data);

The allocation function should take care not to sleep unless __GFP_WAIT is set in the given gfp_mask. In all of the above cases, pool_data is a private pointer that may be used by the allocation and deallocation functions.

Creators of mempools will often want to use the slab allocator to do the actual object allocation and deallocation. To do that, create the slab, pass it in to mempool_create() as the pool_data value, and give mempool_alloc_slab and mempool_free_slab as the allocation and deallocation functions.
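Here is a minimal sketch of that pattern; the cache name, object structure, and reserve size are all invented for the example:

    #include <linux/slab.h>
    #include <linux/mempool.h>

    static kmem_cache_t *my_cache;   /* slab cache backing the pool */
    static mempool_t *my_pool;

    static int my_pool_init(void)
    {
            /* Hypothetical object type and minimum reserve. */
            my_cache = kmem_cache_create("my_objects",
                                         sizeof(struct my_object),
                                         0, 0, NULL, NULL);
            if (my_cache == NULL)
                    return -ENOMEM;
            my_pool = mempool_create(MY_MIN_RESERVE, mempool_alloc_slab,
                                     mempool_free_slab, my_cache);
            if (my_pool == NULL) {
                    kmem_cache_destroy(my_cache);
                    return -ENOMEM;
            }
            return 0;
    }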

A mempool may be returned to the system by passing it to mempool_destroy(). You must have returned all items to the pool before destroying it, or the mempool code will get upset and oops the system.

Allocating and freeing objects from the mempool is done with:

    void *mempool_alloc(mempool_t *pool, int gfp_mask);
    void mempool_free(void *element, mempool_t *pool);

mempool_alloc() will first call the pool's allocation function to satisfy the request; the pre-allocated pool will only be used if the allocation function fails. The allocation may sleep if the given gfp_mask allows it; it can also fail if memory is tight and the preallocated pool has been exhausted.
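Using the slab-backed pool sketched earlier (the names are still hypothetical), an allocation on the I/O path might look like:

    struct my_object *obj;

    /* GFP_NOIO allows sleeping but starts no I/O; if the slab allocator
       cannot satisfy the request, the pool's reserved objects are used. */
    obj = mempool_alloc(my_pool, GFP_NOIO);
    if (obj == NULL)
            return -ENOMEM;
    /* ... use the object ... */
    mempool_free(obj, my_pool);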

Finally, a pool can be resized, if necessary, with:

    int mempool_resize(mempool_t *pool, int new_min_nr, int gfp_mask);

This function will change the size of the pre-allocated pool, using the given gfp_mask to allocate more memory if need be. Note that, as of 2.5.60, mempool_resize() is disabled in the source, since nobody is actually using it.
