Reposted from: https://blog.youkuaiyun.com/zhonglinzhang/article/details/74626412
1. Memory Allocator
The Go memory allocator is based on the tcmalloc model, as the header comment in malloc.go states explicitly:
based on tcmalloc http://goog-perftools.sourceforge.net/doc/tcmalloc.html
The allocator works in units of pages. Objects smaller than 32KB fall into roughly 70 size classes, and the free slots inside a span are tracked with bitmaps.
Cache, Central, and Heap are the three core components.
Each thread has its own cache, which serves small objects; the Central and Heap components are shared by all threads.
1.1 Allocator data structures:
- fixalloc: a free-list allocator for fixed-size heap objects, used to manage storage for the allocator's own bookkeeping structures (a minimal sketch of the pattern follows this list)
- mheap: the heap allocator, managed in units of 8192-byte pages
- mspan: a run of pages managed by mheap
- mcentral: collects all mspans of a given size class. Central is itself a cache, but what it caches is groups of memory pages (each page is 8KB) rather than individual small-object blocks
- mstats: allocation statistics
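To make the fixalloc idea concrete, here is a minimal sketch of a fixed-size free-list allocator in the same spirit. All names here (fixedAlloc, newFixedAlloc, freeBlock) are invented for illustration; the real fixalloc hands out blocks carved from raw memory rather than Go slices.

```go
package main

import "fmt"

// fixedAlloc is a toy fixed-size allocator: it hands out blocks of one
// size and keeps freed blocks on a free list for reuse, the same basic
// scheme fixalloc uses for mspan/mcache bookkeeping structures.
type fixedAlloc struct {
	size int      // block size in bytes
	free [][]byte // freed blocks waiting to be reused
}

func newFixedAlloc(size int) *fixedAlloc {
	return &fixedAlloc{size: size}
}

// alloc returns a block, reusing a freed one if possible.
func (f *fixedAlloc) alloc() []byte {
	if n := len(f.free); n > 0 {
		b := f.free[n-1]
		f.free = f.free[:n-1]
		return b
	}
	return make([]byte, f.size) // the real fixalloc carves chunks from raw OS memory
}

// freeBlock puts a block back on the free list.
func (f *fixedAlloc) freeBlock(b []byte) {
	f.free = append(f.free, b)
}

func main() {
	fa := newFixedAlloc(64)
	b := fa.alloc()
	fa.freeBlock(b)
	fmt.Println("reused block size:", len(fa.alloc()))
}
```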
1.2 Small object allocation
- a. Go straight to the current thread's cache and scan the mspan's free bitmap; if there is a free slot, allocate from it. No lock is needed, because the mcache is owned by the thread.
- b. If the mspan has no free slot, fetch an mspan of the required size class from the mcentral list; this requires taking a lock.
- c. If mcentral's mspan list is empty, obtain pages from mheap.
- d. If mheap is empty or has no run of pages large enough, allocate pages from the operating system, at least 1MB at a time (a simplified sketch of this fallback path follows the list).
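The sketch below restates steps a–d as code. All types and function names are invented for illustration; in the real runtime this path runs through mcache refill, mcentral.cacheSpan and mheap's allocator.

```go
package main

// Toy stand-ins for the runtime types; only what the sketch needs.
type span struct{ freeSlots int }
type cache struct{ alloc map[int]*span } // per-thread, accessed without locks
type central struct{ nonempty []*span }  // per size class, protected by a lock
type heap struct{ pages int }            // global, protected by a lock

// allocSmall follows steps a-d for one size class.
func allocSmall(c *cache, ctl *central, h *heap, sizeclass int) *span {
	// a. try the thread-local cache: no lock needed.
	if s := c.alloc[sizeclass]; s != nil && s.freeSlots > 0 {
		s.freeSlots--
		return s
	}
	// b. refill from the central list of this size class (takes central's lock).
	if n := len(ctl.nonempty); n > 0 {
		s := ctl.nonempty[n-1]
		ctl.nonempty = ctl.nonempty[:n-1]
		c.alloc[sizeclass] = s
		s.freeSlots--
		return s
	}
	// c. central is empty: grow it with pages taken from the heap.
	// d. if the heap itself is out of pages, it asks the OS for at least 1MB.
	if h.pages == 0 {
		h.pages += 1 << 20 / 8192 // pretend the OS returned 1MB of 8KB pages
	}
	h.pages--
	s := &span{freeSlots: 10}
	c.alloc[sizeclass] = s
	s.freeSlots--
	return s
}

func main() {
	c := &cache{alloc: map[int]*span{}}
	allocSmall(c, &central{}, &heap{}, 3)
}
```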
1.3 Large object allocation
Large objects are allocated directly from mheap.
2. Key data structures
2.1 mheap
The free and busy arrays manage multiple linked lists indexed by span length (in pages). When central needs memory, it simply picks from free the list whose page count fits.
```go
type mheap struct {
	lock      mutex
	free      [_MaxMHeapList]mSpanList // free lists of given length
	freelarge mSpanList                // free lists length >= _MaxMHeapList
	busy      [_MaxMHeapList]mSpanList // busy lists of large objects of given length
	busylarge mSpanList                // busy lists of large objects length >= _MaxMHeapList
	allspans  **mspan                  // all spans out there
	gcspans   **mspan                  // copy of allspans referenced by gc marker or sweeper
	nspan     uint32
	sweepgen  uint32 // sweep generation, see comment in mspan
	sweepdone uint32 // all spans are swept

	// span lookup
	spans        **mspan
	spans_mapped uintptr

	// Proportional sweep
	pagesInUse        uint64  // pages of spans in stats _MSpanInUse; R/W with mheap.lock
	spanBytesAlloc    uint64  // bytes of spans allocated this cycle; updated atomically
	pagesSwept        uint64  // pages swept this cycle; updated atomically
	sweepPagesPerByte float64 // proportional sweep ratio; written with lock, read without
	// TODO(austin): pagesInUse should be a uintptr, but the 386
	// compiler can't 8-byte align fields.

	// Malloc stats.
	largefree  uint64                  // bytes freed for large objects (>maxsmallsize)
	nlargefree uint64                  // number of frees for large objects (>maxsmallsize)
	nsmallfree [_NumSizeClasses]uint64 // number of frees for small objects (<=maxsmallsize)

	// range of addresses we might see in the heap
	bitmap         uintptr // Points to one byte past the end of the bitmap
	bitmap_mapped  uintptr
	arena_start    uintptr
	arena_used     uintptr // always mHeap_Map{Bits,Spans} before updating
	arena_end      uintptr
	arena_reserved bool

	// central free lists for small size classes.
	// the padding makes sure that the MCentrals are
	// spaced CacheLineSize bytes apart, so that each MCentral.lock
	// gets its own cache line.
	central [_NumSizeClasses]struct {
		mcentral mcentral
		pad      [sys.CacheLineSize]byte
	}

	spanalloc             fixalloc // allocator for span*
	cachealloc            fixalloc // allocator for mcache*
	specialfinalizeralloc fixalloc // allocator for specialfinalizer*
	specialprofilealloc   fixalloc // allocator for specialprofile*
	speciallock           mutex    // lock for special record allocators.
}
```
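When an n-page span is needed, the heap scans free[n], free[n+1], … and finally freelarge, splitting a larger span when necessary. The following sketch shows roughly that lookup (the real logic lives in mheap's span allocation under the heap lock; miniHeap, spanList and allocSpan below are invented names, and the splitting of leftover pages is omitted):

```go
package main

const maxMHeapList = 128 // plays the role of _MaxMHeapList

type span struct{ npages int }
type spanList []*span

type miniHeap struct {
	free      [maxMHeapList]spanList // free[i] holds spans of exactly i pages
	freelarge spanList               // spans with >= maxMHeapList pages
}

// allocSpan looks for the best-fitting free span of at least npage pages.
func (h *miniHeap) allocSpan(npage int) *span {
	for i := npage; i < len(h.free); i++ {
		if n := len(h.free[i]); n > 0 {
			s := h.free[i][n-1]
			h.free[i] = h.free[i][:n-1]
			return s // the real code trims s to npage pages and re-lists the rest
		}
	}
	if n := len(h.freelarge); n > 0 { // fall back to the large list
		s := h.freelarge[n-1]
		h.freelarge = h.freelarge[:n-1]
		return s
	}
	return nil // caller would then grow the heap from the OS
}

func main() {
	h := &miniHeap{}
	h.free[4] = spanList{{npages: 4}}
	_ = h.allocSpan(3) // gets the 4-page span, since free[3] is empty
}
```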
2.2 mspan
```go
type mspan struct {
	next *mspan  // next span in list, or nil if none
	prev **mspan // previous span's next field, or list head's first field if none

	startAddr     uintptr   // address of first byte of span aka s.base()
	npages        uintptr   // number of pages in span
	stackfreelist gclinkptr // list of free stacks, avoids overloading freelist

	freeindex uintptr
	// TODO: Look up nelems from sizeclass and remove this field if it
	// helps performance.
	nelems uintptr // number of object in the span.

	allocCache uint64
	allocBits  *uint8
	gcmarkBits *uint8

	// sweep generation:
	// if sweepgen == h->sweepgen - 2, the span needs sweeping
	// if sweepgen == h->sweepgen - 1, the span is currently being swept
	// if sweepgen == h->sweepgen, the span is swept and ready to use
	// h->sweepgen is incremented by 2 after every GC
	sweepgen    uint32
	divMul      uint32   // for divide by elemsize - divMagic.mul
	allocCount  uint16   // capacity - number of objects in freelist
	sizeclass   uint8    // size class
	incache     bool     // being used by an mcache
	state       uint8    // mspaninuse etc
	needzero    uint8    // needs to be zeroed before allocation
	divShift    uint8    // for divide by elemsize - divMagic.shift
	divShift2   uint8    // for divide by elemsize - divMagic.shift2
	elemsize    uintptr  // computed from sizeclass or from npages
	unusedsince int64    // first time spotted by gc in mspanfree state
	npreleased  uintptr  // number of pages released to the os
	limit       uintptr  // end of data in span
	speciallock mutex    // guards specials list
	specials    *special // linked list of special records sorted by offset.
	baseMask    uintptr  // if non-0, elemsize is a power of 2, & this will get object allocation base
}
```
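The divMul / divShift / divShift2 / baseMask fields exist so that mapping an interior pointer back to its object slot (byteOffset / elemsize) never needs an integer division. The sketch below shows the idea only, not the runtime's exact constants or code path: a power-of-two elemsize needs just a shift, anything else uses a precomputed "magic" multiply-and-shift. The divMagic type and constant choice here are my own; the loop verifies the trick against plain division for a few real size classes.

```go
package main

import "fmt"

// divMagic holds precomputed constants so that offset/elemsize can be done
// as a multiply plus shifts, mirroring the role of mspan.divMul/divShift/divShift2.
type divMagic struct {
	elemsize uint64
	pow2     bool
	shift    uint64 // used alone when elemsize is a power of two
	mul      uint64 // magic multiplier otherwise
	shift2   uint64
}

func newDivMagic(elemsize uint64) divMagic {
	if elemsize&(elemsize-1) == 0 { // power of two
		s := uint64(0)
		for 1<<s < elemsize {
			s++
		}
		return divMagic{elemsize: elemsize, pow2: true, shift: s}
	}
	// One simple (not runtime-identical) choice: ceil(2^32/elemsize),
	// accurate for the offsets that occur inside a small-object span.
	return divMagic{
		elemsize: elemsize,
		mul:      (1<<32 + elemsize - 1) / elemsize,
		shift2:   32,
	}
}

// objIndex returns byteOffset / elemsize without a divide instruction.
func (d divMagic) objIndex(byteOffset uint64) uint64 {
	if d.pow2 {
		return byteOffset >> d.shift
	}
	return byteOffset * d.mul >> d.shift2
}

func main() {
	for _, size := range []uint64{8, 48, 112, 1152} { // a few real size classes
		d := newDivMagic(size)
		for off := uint64(0); off < 32<<10; off += size {
			if d.objIndex(off) != off/size {
				fmt.Println("mismatch at", size, off)
			}
		}
	}
	fmt.Println("magic division matches plain division")
}
```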
3. Code analysis: malloc.go -> mallocinit
The allocator's management algorithm relies on contiguous memory addresses, so at initialization time the allocator reserves a huge range of virtual address space. That space is divided into three parts:
- arena: the region from which user memory is actually allocated.
- bitmap: GC metadata for the arena, 2 bits per pointer-sized word (this matches the bitmapSize computation below), used by the garbage collector.
- spans: records which span each page belongs to, used for reverse lookup and for merging adjacent spans (see the lookup sketch below).
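The spans region is what lets the runtime go from an arbitrary heap pointer to its mspan in O(1): index the spans table with the page number of the address. A small sketch of that lookup (mspanStub, arenaStart and spans are toy stand-ins for the runtime's mspan, mheap_.arena_start and the global spans table):

```go
package main

import "fmt"

const pageShift = 13 // 8KB pages, matching _PageShift

type mspanStub struct{ id int }

// spanOf indexes the spans table with the page number of the address,
// which is essentially what the runtime's span lookup does.
func spanOf(p, arenaStart uintptr, spans []*mspanStub) *mspanStub {
	return spans[(p-arenaStart)>>pageShift]
}

func main() {
	arenaStart := uintptr(0x10000000)
	spans := make([]*mspanStub, 16) // one entry per 8KB page
	s := &mspanStub{id: 7}
	for i := 3; i <= 5; i++ { // a 3-page span covering pages 3..5
		spans[i] = s
	}
	p := arenaStart + 4<<pageShift + 100         // pointer into the middle of page 4
	fmt.Println(spanOf(p, arenaStart, spans).id) // 7
}
```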
The heap's own initialization mainly consists of creating the various span-management linked lists and the central array.
Key code:
initSizes() initializes the size class tables.
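The essential idea behind those tables: one table maps every possible small request size (rounded up) to a class, and another maps the class back to the actual block size. A minimal sketch with made-up classes (the real runtime has roughly 70 classes up to 32KB; classToSize and sizeToClass below are simplified stand-ins for its tables):

```go
package main

import "fmt"

// A toy size-class table; the real one is built by initSizes.
var classToSize = []int{0, 8, 16, 32, 48, 64, 80, 96, 112, 128}

// sizeToClass maps a requested size to the smallest class that fits it,
// i.e. the same rounding-up the runtime's lookup tables encode.
func sizeToClass(size int) int {
	for c := 1; c < len(classToSize); c++ {
		if size <= classToSize[c] {
			return c
		}
	}
	return -1 // would be a larger class, or a large object
}

func main() {
	for _, n := range []int{1, 9, 33, 100} {
		c := sizeToClass(n)
		fmt.Printf("request %3dB -> class %d, block %dB\n", n, c, classToSize[c])
	}
}
```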
On 64-bit systems (with no small address-space limit set), mallocinit then computes how big each region must be (excerpt):

```go
if sys.PtrSize == 8 && (limit == 0 || limit > 1<<30) {
	arenaSize := round(_MaxMem, _PageSize)
	bitmapSize = arenaSize / (sys.PtrSize * 8 / 2)
	spansSize = arenaSize / _PageSize * sys.PtrSize
	spansSize = round(spansSize, _PageSize)
	// ... (the reservation of spans+bitmap+arena continues)
}
```
After the reservation succeeds, the region pointers are recorded on mheap_:

```go
mheap_.spans = (**mspan)(unsafe.Pointer(p1))
mheap_.bitmap = p1 + spansSize + bitmapSize
if sys.PtrSize == 4 {
	// Set arena_start such that we can accept memory
	// reservations located anywhere in the 4GB virtual space.
	mheap_.arena_start = 0
} else {
	mheap_.arena_start = p1 + (spansSize + bitmapSize)
}
mheap_.arena_end = p + pSize
mheap_.arena_used = p1 + (spansSize + bitmapSize)
mheap_.arena_reserved = reserved
```
This leaves the heap's spans, bitmap, and arena regions initialized.
4. Code analysis: mheap.go -> init
```go
func (h *mheap) init(spans_size uintptr) {
	h.spanalloc.init(unsafe.Sizeof(mspan{}), recordspan, unsafe.Pointer(h), &memstats.mspan_sys)
	h.cachealloc.init(unsafe.Sizeof(mcache{}), nil, nil, &memstats.mcache_sys)
	h.specialfinalizeralloc.init(unsafe.Sizeof(specialfinalizer{}), nil, nil, &memstats.other_sys)
	h.specialprofilealloc.init(unsafe.Sizeof(specialprofile{}), nil, nil, &memstats.other_sys)

	// h->mapcache needs no init
	for i := range h.free {
		h.free[i].init()
		h.busy[i].init()
	}

	h.freelarge.init()
	h.busylarge.init()
	for i := range h.central {
		h.central[i].mcentral.init(int32(i))
	}

	sp := (*slice)(unsafe.Pointer(&h_spans))
	sp.array = unsafe.Pointer(h.spans)
	sp.len = int(spans_size / sys.PtrSize)
	sp.cap = int(spans_size / sys.PtrSize)
}
```
- Initialize the fixed-size allocators (fixalloc) for the runtime's management types
- Initialize the free / busy arrays
- Initialize the large lists (freelarge / busylarge)
- Initialize the mcentral for every size class
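The last three lines of init above deserve a note: they turn the raw spans reservation into an ordinary Go slice (the runtime-internal global h_spans) by writing its header directly. Below is a sketch of the same trick on plain memory using reflect.SliceHeader instead of the runtime's private slice struct; this is for illustration only (it has aliasing pitfalls, and newer Go offers unsafe.Slice for this purpose).

```go
package main

import (
	"fmt"
	"reflect"
	"unsafe"
)

func main() {
	// Pretend this array is a chunk of reserved memory, the way the
	// spans area is just reserved address space in mallocinit.
	var backing [4]uintptr
	backing[2] = 42

	// Build a []uintptr view over it by filling in the slice header,
	// which is what mheap.init does for h_spans.
	var view []uintptr
	hdr := (*reflect.SliceHeader)(unsafe.Pointer(&view))
	hdr.Data = uintptr(unsafe.Pointer(&backing[0]))
	hdr.Len = len(backing)
	hdr.Cap = len(backing)

	fmt.Println(view[2]) // 42: reads straight from the backing array
}
```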
5. Code analysis: malloc.go -> newobject
```go
func newobject(typ *_type) unsafe.Pointer {
	return mallocgc(typ.size, typ, true)
}
```
To summarize what mallocgc does:
- Large objects get a span directly from the heap.
- Small objects are served from the free slots of the span cached in cache.alloc[sizeclass].
- Tiny objects are packed together into the shared cache.tiny block (a sketch of this path follows the list).
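The tiny-object path is worth a sketch of its own: pointer-free requests smaller than 16 bytes are packed into one 16-byte block held by the cache, with an offset tracking how much of the current block is used. The model below is simplified and uses invented names (tinyCache, allocTiny); the real logic sits at the top of mallocgc using mcache.tiny and mcache.tinyoffset.

```go
package main

import "fmt"

const tinySize = 16 // plays the role of maxTinySize

// tinyCache models the mcache.tiny / mcache.tinyoffset pair.
type tinyCache struct {
	block  []byte // current 16-byte tiny block
	offset int    // bytes of the block already handed out
}

// allocTiny hands out size bytes (size < 16, pointer-free) from the
// current tiny block, aligning the offset and starting a new block
// when the request does not fit.
func (t *tinyCache) allocTiny(size, align int) []byte {
	off := (t.offset + align - 1) &^ (align - 1) // round up to alignment
	if t.block == nil || off+size > tinySize {
		t.block = make([]byte, tinySize) // real code takes a fresh 16B object from a span
		off = 0
	}
	t.offset = off + size
	return t.block[off : off+size]
}

func main() {
	var t tinyCache
	a := t.allocTiny(5, 1)
	b := t.allocTiny(4, 4)                // lands in the same 16-byte block, at offset 8
	fmt.Println(len(a), len(b), t.offset) // 5 4 12
}
```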
Large object allocation goes through largeAlloc:
```go
func largeAlloc(size uintptr, needzero bool) *mspan {
	if size+_PageSize < size {
		throw("out of memory")
	}
	npages := size >> _PageShift
	if size&_PageMask != 0 {
		npages++
	}
	deductSweepCredit(npages*_PageSize, npages)

	s := mheap_.alloc(npages, 0, true, needzero)
	if s == nil {
		throw("out of memory")
	}
	s.limit = s.base() + size
	heapBitsForSpan(s.base()).initSpan(s)
	return s
}
```
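For example, a 100KB (102400-byte) request gives npages = 102400 >> 13 = 12 with a nonzero remainder (size & _PageMask = 4096), so npages is bumped to 13: the object occupies thirteen 8KB pages, and s.limit records that the data actually ends at base()+102400.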
Recall the sweepgen field of mspan:

```go
type mspan struct {
	// sweep generation:
	// if sweepgen == h->sweepgen - 2, the span needs sweeping
	// if sweepgen == h->sweepgen - 1, the span is currently being swept
	// if sweepgen == h->sweepgen, the span is swept and ready to use
	// h->sweepgen is incremented by 2 after every GC
	sweepgen uint32
}
```
Spans sitting idle in the heap are of no interest to the garbage collector, but a span held by a central may well be in the middle of being swept. That is why this field matters when a cache pulls a span out of a central.
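A sketch of how the check plays out when a central hands a span to a cache, loosely modeled on mcentral.cacheSpan (names and structure simplified; the CAS from sg-2 to sg-1 is what claims the span for sweeping):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type span struct{ sweepgen uint32 }

// pickSpan models the sweepgen check: sg is the heap's current sweep generation.
func pickSpan(s *span, sg uint32) string {
	switch {
	case atomic.LoadUint32(&s.sweepgen) == sg-2 &&
		atomic.CompareAndSwapUint32(&s.sweepgen, sg-2, sg-1):
		// We won the race to sweep it: sweep now, then hand it out.
		s.sweepgen = sg // the sweeper sets this after freeing dead slots
		return "swept by us, usable"
	case atomic.LoadUint32(&s.sweepgen) == sg-1:
		return "being swept by someone else, skip it"
	default:
		return "already swept, usable as is"
	}
}

func main() {
	const sg = 10                                      // pretend h.sweepgen is 10
	fmt.Println(pickSpan(&span{sweepgen: sg - 2}, sg)) // needs sweeping
	fmt.Println(pickSpan(&span{sweepgen: sg}, sg))     // ready to use
}
```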