1. Problem Description
While bringing up our company's GPU on LoongArch, the console rendered incorrectly after entering the desktop and opening a terminal, and xgltest1 failed to run. The kernel log reported a GPU MMU page fault. To simplify the scenario, we skipped the desktop and ran rgx_triangle_test directly; it also failed, and the kernel log again reported a GPU MMU page fault.
The system and kernel versions are as follows:
root@cqjcdl:~# lsb_release -a
No LSB modules are available.
Distributor ID: Kylin
Description: Kylin V10 SP1
Release: v10
Codename: kylin
root@cqjcdl:~# uname -a
Linux cqjcdl 5.4.18-77-generic #66-KYLINOS SMP Thu Dec 1 09:28:47 UTC 2022 loongarch64 loongarch64 loongarch64 GNU/Linux
2. Problem Analysis
Log Analysis
When the application starts, it allocates GDDR and builds the page tables at that point, mapping GDDR physical memory (GPA, where the G denotes GPU) to device virtual addresses (GVA). A page fault should therefore never occur while the GPU is running, and no mechanism for servicing GPU page faults is implemented. The most likely causes were presumed to be:
- The memory holding the page tables was corrupted.
- Page-table creation went wrong, producing a bad mapping.
- The device virtual address itself was wrong.
The kernel log is as follows:
Apr 7 17:18:31 cqjcdl kernel: [ 55.917643][ 3] PVR_K: 2323: RGX FW OS 0 - State: active; Freelists: Not Ok; Priority: 0; MTS on;
Apr 7 17:18:31 cqjcdl kernel: [ 55.917676][ 3] PVR_K: 2323: Number of HWR: GP(0/0+0), TDM(0/0+0), GEOM(1/1+0), 3D(0/0+0), CDM(0/0+0), FALSE(0,0,0,0,0)
Apr 7 17:18:31 cqjcdl kernel: [ 55.917677][ 3] PVR_K: 2323: DM 0 (GP)
Apr 7 17:18:31 cqjcdl kernel: [ 55.917709][ 3] PVR_K: 2323: DM 1 (HWRflags 0x00000000: working;)
Apr 7 17:18:31 cqjcdl kernel: [ 55.917728][ 3] PVR_K: 2323: DM 2 (HWRflags 0x00000000: working;)
Apr 7 17:18:31 cqjcdl kernel: [ 55.917753][ 3] PVR_K: 2323: Recovery 1: Core = 2, PID = 2480 / rgx_triangle_te, frame = 0, HWRTData = 0x10074E80, EventStatus = 0x00000000, Guilty Lockup
Apr 7 17:18:31 cqjcdl kernel: [ 55.917757][ 3] PVR_K: 2323: CRTimer = 0x000015122D31, OSTimer = 55.919260968, CyclesElapsed = 23040
Apr 7 17:18:31 cqjcdl kernel: [ 55.917763][ 3] PVR_K: 2323: PreResetTimeInCycles = 24320, HWResetTimeInCycles = 54528, TotalResetTimeInCycles = 78848
Apr 7 17:18:31 cqjcdl kernel: [ 55.917765][ 3] PVR_K: 2323: Active PDS DM USCs = 0x00000002
Apr 7 17:18:31 cqjcdl kernel: [ 55.917767][ 3] PVR_K: 2323: DMs stalled waiting on PDS Store space = 0x00000000
Apr 7 17:18:31 cqjcdl kernel: [ 55.917772][ 3] PVR_K: 2323: MMU (Core) - FAULT:
Apr 7 17:18:31 cqjcdl kernel: [ 55.917777][ 3] PVR_K: 2323: * MMU status (0x820F001100E54089 | 0x00000175): PC = 15, Reading from 0x1100E54080, MCU L1(IP1 PDS), Fault (Page Catalog).
Apr 7 17:18:31 cqjcdl kernel: [ 55.917784][ 3] MMU_CheckFaultAddress 3934 psDevVAddr->uiAddr=0x1100e54080 psDevAttrs->psTopLevelDevVAddrConfig->uiPCIndexMask=0xffc0000000 psDevAttrs->psTopLevelDevVAddrConfig->uiPCIndexShift=30
Apr 7 17:18:31 cqjcdl kernel: [ 55.917785][ 3] MMU_CheckFaultAddress 3940 psMMULevelData->uiBytesPerEntry=4, psMMULevelData->ui32Index=68 psLevel->ui32NumOfEntries=1024
Apr 7 17:18:31 cqjcdl kernel: [ 55.917788][ 3] MMU_CheckFaultAddress 3956 psMMULevelData->uiBytesPerEntry=4, ui32PCIndex=68 pui32Ptr[ui32PCIndex]=0x0 pui32Ptr=0x80000e0181744000
Analysis of the log shows that the GPU faulted while accessing device virtual address 0x1100E54080, and that the fault occurred already at the first level of translation, the Page Catalog. The GPU uses a three-level page table: Page Catalog (PC), Page Directory (PD), and Page Table (PT).
Page-Table Creation Analysis
Based on the above, we first verified whether page-table creation had gone wrong (the easiest hypothesis to check). Code analysis shows that page-table entries are set through the following paths:
MMU_MapPages->_SetupPTE
MMU_MapPMRFast->_SetupPTE
Debug output was added to MMU_MapPages and MMU_MapPMRFast to print every GVA-to-GPA mapping. It showed that no mapping had ever been created for the faulting GVA.
Locating the Device Virtual Address
With page-table creation ruled out, repeated tests showed that the low 32 bits of the faulting device virtual address never changed, while the upper 8 bits sometimes did, so we suspected the device virtual address itself was wrong. The help output of rgx_triangle_test lists the following option for simulating a page fault:
-hwr {u} {n} Simulate HW unit pagefault lockup ('u' = vdm|vs|vspdsfetch|ps|pbe|blit) or use-after-free race condition ('u' = vspdsfetch_race) every 'n' kicks
Running the following command produced a page fault whose log matched the earlier failure:
rgx_triangle_test -hwr vspdsfetch 1
Analyzing the rgx_triangle_test code reveals the following function:
/* A bad address which will cause a page fault - used for testing Hardware Recovery */
#define HWR_PAGE_FAULT_ADDRESS IMG_UINT64_C(0xFFBABABAB0)
// Generate the PDS and USC shaders and write them into psVertexPDSBuffer and psVertexUSCBuffer
void rtt_WriteVertexShaderCode(RTT_DEVMEM_BUFFER *psVertexUSCBuffer,
RTT_DEVMEM_BUFFER *psVertexPDSBuffer,
RTT_DCE_COMP_CTRL *psDCECompControl,
IMG_DEV_VIRTADDR *psVerticesDevVAddr,
IMG_UINT32 ui32SlowdownFactor,
IMG_BOOL bTestVertexHWR,
IMG_BOOL bTestHWRVSPDSFetch,
IMG_BOOL bTestOverrunVS,
IMG_BOOL bDebugProgramAddrs)
{
PDSGEN_VERTEX_SHADER_PROGRAM sVertexShaderPDSProgram = {0};
IMG_UINT32 *pui32VertexUSCBuffer, *pui32VertexUSCBufferInit;
IMG_UINT32 *pui32VertexPDSBuffer, *pui32VertexPDSBufferInit;
IMG_DEV_VIRTADDR sUSCDevVAddr;
PVR_UNREFERENCED_PARAMETER(ui32SlowdownFactor);
pui32VertexUSCBufferInit = pui32VertexUSCBuffer = rtt_GetCurrentCPUVAddr(psVertexUSCBuffer);
pui32VertexPDSBufferInit = pui32VertexPDSBuffer = rtt_GetCurrentCPUVAddr(psVertexPDSBuffer);
if (bDebugProgramAddrs)
{
IMG_DEV_VIRTADDR sDevVAddr;
rtt_GetBufferDevVAddr(psVertexPDSBuffer, &sDevVAddr);
DPF(IMG_DEV_VIRTADDR_FMTSPEC ": Vertex shader PDS Program\n", sDevVAddr.uiAddr);
rtt_GetBufferDevVAddr(psVertexUSCBuffer, &sDevVAddr);
DPF(IMG_DEV_VIRTADDR_FMTSPEC ": Vertex shader USC Program\n", sDevVAddr.uiAddr);
}
/**************************************************************************
* Set up the PDS vertex shader program
**************************************************************************/
/* Setup the input program structure */
sVertexShaderPDSProgram.ui32NumStreams = 1;
sVertexShaderPDSProgram.bIterateVtxID = IMG_FALSE;
sVertexShaderPDSProgram.bIterateInstanceID = IMG_FALSE;
sVertexShaderPDSProgram.bIteratePrimID = IMG_FALSE;
sVertexShaderPDSProgram.ui32NumElements = 1;
/* Set up the vertex streams and elements in the program */
sVertexShaderPDSProgram.asStreams[0].bInstanceData = IMG_FALSE;
sVertexShaderPDSProgram.asStreams[0].ui32InstanceDivisor = 0;
sVertexShaderPDSProgram.asStreams[0].bCurrentState = IMG_FALSE;
//sVertexShaderPDSProgram.asStreams[0].ui32Multiplier = 0;
if (bTestHWRVSPDSFetch)
{
sVertexShaderPDSProgram.asStreams[0].uAddress.uiAddr = HWR_PAGE_FAULT_ADDRESS;
}
else
{
sVertexShaderPDSProgram.asStreams[0].uAddress.uiAddr = psVerticesDevVAddr->uiAddr; //psVerticesDevVAddr->uiAddr=0x8000e54018
}
sVertexShaderPDSProgram.asStreams[0].bUseDDMADT = IMG_FALSE;
sVertexShaderPDSProgram.asStreams[0].ui32BufferSizeInByte = 0;
//sVertexShaderPDSProgram.asStreams[0].ui32Shift = 0;
sVertexShaderPDSProgram.asStreams[0].ui32Stride = (4*RTT_VERTEX_SIZE);
sVertexShaderPDSProgram.asElements[0].ui32Offset = 0;
sVertexShaderPDSProgram.asElements[0].ui32Size = (4*RTT_VERTEX_SIZE);
sVertexShaderPDSProgram.asElements[0].ui16Register = 0;
sVertexShaderPDSProgram.asElements[0].ui16ComponentSize = 0;
if (bTestVertexHWR)
{
sUSCDevVAddr.uiAddr = HWR_PAGE_FAULT_ADDRESS;
} else if (bTestOverrunVS) {
rtt_GetBufferDevVAddr(psVertexUSCBuffer, &sUSCDevVAddr);
memcpy(pui32VertexUSCBuffer, g_pszRtt_infinite_loop, sizeof(g_pszRtt_infinite_loop));
pui32VertexUSCBuffer += (sizeof(g_pszRtt_infinite_loop)/sizeof(IMG_UINT32));
} else {
rtt_GetBufferDevVAddr(psVertexUSCBuffer, &sUSCDevVAddr);
memcpy(pui32VertexUSCBuffer, g_pszRtt_vertex_shader, sizeof(g_pszRtt_vertex_shader));
pui32VertexUSCBuffer += (sizeof(g_pszRtt_vertex_shader)/sizeof(IMG_UINT32));
}
sVertexShaderPDSProgram.sUSCExecAddr = sUSCDevVAddr;
sVertexShaderPDSProgram.ui32NumUSCTemps = g_ui32Rtt_vertex_shaderTemporaryRegCount;
/* Set the Data and code sizes required for the program */
if (!PDSGENVertexShaderCode (psDCECompControl->psPSCContext, &sVertexShaderPDSProgram))
{
sutu_fail_if_error(PVRSRV_ERROR_UNABLE_TO_COMPILE_PDS);
}
PVR_ASSERT(sVertexShaderPDSProgram.psPSCOutput->ui32CodeSizeInDW > 0);
PVR_ASSERT(sVertexShaderPDSProgram.psPSCOutput->ui32DataSizeInDW > 0);
//psVertexPDSBuffer->psCodeBlock->sDevVirtAddr=0xda0001c400; after subtracting psVertexPDSBuffer->sHeapBaseDevAddr.uiAddr=0xda00000000,
//psDCECompControl->sPDSVertexShaderDataAddr=0x1c400
rtt_GetBufferDevVAddr(psVertexPDSBuffer, &psDCECompControl->sPDSVertexShaderDataAddr);
pui32VertexPDSBuffer = PDSGENVertexShaderDataSegment(&sVertexShaderPDSProgram, pui32VertexPDSBuffer);// writes 0x8000e54018 into pui32VertexPDSBuffer
rtt_UpdateCurrentOffset(psVertexUSCBuffer, pui32VertexUSCBuffer, pui32VertexUSCBufferInit);
rtt_UpdateCurrentOffset(psVertexPDSBuffer, pui32VertexPDSBuffer, pui32VertexPDSBufferInit);
pui32VertexPDSBuffer = pui32VertexPDSBufferInit = rtt_GetCurrentCPUVAddr(psVertexPDSBuffer);
memcpy(pui32VertexPDSBuffer,
sVertexShaderPDSProgram.psPSCOutput->pbyCode,
sVertexShaderPDSProgram.psPSCOutput->ui32CodeSizeInDW << 2);
pui32VertexPDSBuffer += (sVertexShaderPDSProgram.psPSCOutput->ui32CodeSizeInDW);
rtt_UpdateCurrentOffset(psVertexPDSBuffer, pui32VertexPDSBuffer, pui32VertexPDSBufferInit);
/* Cache PDS data/temp size for the state update words */
psDCECompControl->ui32DataSize = sVertexShaderPDSProgram.psPSCOutput->ui32DataSizeInDW << 2;
psDCECompControl->ui32TempSize = sVertexShaderPDSProgram.psPSCOutput->ui32TempSizeInDW << 2;
/* Free the compiled PDS vertex shader output */
PDSGENCleanPSCOutput(psDCECompControl->psPSCContext, sVertexShaderPDSProgram.psPSCOutput);
}
When simulating the page fault, the if branch of the following code is taken:
if (bTestHWRVSPDSFetch)
{
sVertexShaderPDSProgram.asStreams[0].uAddress.uiAddr = HWR_PAGE_FAULT_ADDRESS;
}
else
{
sVertexShaderPDSProgram.asStreams[0].uAddress.uiAddr = psVerticesDevVAddr->uiAddr; //psVerticesDevVAddr->uiAddr=0x8000e54018
}
The device virtual address reported with the simulated page fault was 0xFFBABABAB0, i.e. HWR_PAGE_FAULT_ADDRESS. Since the simulated fault produced the same error signature as the normal run, we suspected the normal run's fault was related to the else branch above (the normal run takes the else branch).
We built a debug version and inspected it with gdb: sVertexShaderPDSProgram.asStreams[0].uAddress.uiAddr was 0x8000e54018, whose low 32 bits are very close to those of the faulting device virtual address 0x1100e54080. To verify the connection, we used gdb's set command to change sVertexShaderPDSProgram.asStreams[0].uAddress.uiAddr to 0x8000e44018; the faulting device virtual address then became 0x1100e44080, essentially confirming that this variable is involved.
Why Is the Device Virtual Address Wrong?
Debugging rgx_triangle_test
From the analysis above, rgx_triangle_test sets the device virtual address to 0x8000e54018, yet the faulting device virtual address was 0x1100e54080. The low 8 bits differing can be explained by offset arithmetic, but the upper 8 bits should not differ. According to rgxheapconfig.h, the upper byte 0x80 belongs to the GENERAL HEAP, which matches the code path.
Continuing with how rgx_triangle_test writes this value into GDDR: rtt_WriteVertexShaderCode calls PDSGENVertexShaderDataSegment to perform the write. PDSGENVertexShaderDataSegment looks like this:
IMG_INTERNAL IMG_UINT32 *PDSGENVertexShaderDataSegment(PDSGEN_VERTEX_SHADER_PROGRAM *psProgram,
IMG_UINT32 *pui32Data)
{
IMG_UINT32 i;
PPSC_OUTPUT psOutput = psProgram->psPSCOutput;
/* PDS code has not generated */
if(psOutput == NULL)
{
return NULL;
}
#if defined(PDUMP)
/* Initialize unused parts of the data segment. */
memset(pui32Data, 0xcc, psOutput->ui32DataSizeInDW * sizeof(IMG_UINT32));
#endif /* defined(PDUMP) */
for (i = 0; i < psOutput->ui32ConstCount; i++)
{
PPSC_CONST_LOAD psConst = &(psOutput->psConstLoads[i]);
IMG_PUINT32 pui32Dest = &(pui32Data[psConst->ui16DestOffset / sizeof(IMG_UINT32)]);
switch (psConst->eLoad)
{
case PSC_CONST_IMMEDIATE32:
*pui32Dest = psConst->uData.ui32ImmValue;
break;
case PSC_CONST_IMMEDIATE64:
{
IMG_PUINT64 pui64Dest = (IMG_PUINT64) pui32Dest;
*pui64Dest = psConst->uData.ui64ImmValue;
break;
}
case PSC_CONST_CONSTANT32:
{
IMG_UINT32 ui32Value = 0;
/* USC shader base address */
if (psConst->uData.sConst32.ui32Constant == PSC_CONST_EXEC0_ADDR)
{
ui32Value = (IMG_UINT32)psProgram->sUSCExecAddr.uiAddr;
}
/* Constant for Modifier for Adjusting InstanceID */
else if (psConst->uData.sConst32.ui32Constant == PSC_CONST_INSTANCEID_MODIFIER)
{
ui32Value = (IMG_UINT32)psProgram->ui32InstanceIDModifier;
}
else
{
FF_ERROR("PDSGENVertexShaderDataSegment: Unknown 32bit PDS const");
}
PDSGEN_32BIT_CONST_ADJUST(ui32Value, psConst);
*pui32Dest = ui32Value;
break;
}
#if defined(RGX_FEATURE_ROBUST_BUFFER_ACCESS)
case PSC_CONST_VTX_BUFFER_DESCRIPTOR:
{
IMG_UINT64 ui64Value0, ui64Value1;
IMG_PUINT64 pui64Dest = (IMG_PUINT64) pui32Dest;
IMG_UINT32 ui32VB = psConst->uData.sVtxBufDescriptor.ui32VertexBufferBindingIndex;
ui64Value0 = psProgram->asStreams[ui32VB].uAddress.uiAddr;
ui64Value1 = psProgram->asStreams[ui32VB].ui32BufferSizeInByte;
pui64Dest[0] = ui64Value0;
pui64Dest[1] = ui64Value1;
break;
}
#else
case PSC_CONST_VTX_BUFFER_SIZE:
{
IMG_UINT64 ui64Value = 0;
IMG_PUINT64 pui64Dest = (IMG_PUINT64) pui32Dest;
IMG_UINT32 ui32VB = psConst->uData.sVtxBufSize.ui32VertexBufferIndex;
ui64Value = psProgram->asStreams[ui32VB].ui32BufferSizeInByte;
PDSGEN_64BIT_CONST_ADJUST(ui64Value, psConst);
*pui64Dest = ui64Value;
break;
}
#endif
case PSC_CONST_VTX_BUFFER_ADDR:
{
IMG_UINT64 ui64Value = 0;
IMG_PUINT64 pui64Dest = (IMG_PUINT64) pui32Dest;
IMG_UINT32 ui32VB = psConst->uData.sVtxBufAddr.ui32VertexBufferIndex;
PVR_ASSERT(ui32VB != PSC_CONST_VTXBUF_OOB_BUFFER);
ui64Value = psProgram->asStreams[ui32VB].uAddress.uiAddr;
PDSGEN_64BIT_CONST_ADJUST(ui64Value, psConst);
*pui64Dest = ui64Value;
break;
}
case PSC_CONST_CONSTANT64:
{
/*
* Only vertex buffer addresses are supported
*/
if (psConst->uData.sConst64.ui32Constant >= PSC_CONST_VB0_ADDR &&
psConst->uData.sConst64.ui32Constant < PSC_CONST_VB_ADDR_MAX)
{
IMG_UINT64 ui64Value = 0;
IMG_PUINT64 pui64Dest = (IMG_PUINT64) pui32Dest;
IMG_UINT32 ui32VB = psConst->uData.sConst64.ui32Constant - PSC_CONST_VB0_ADDR;
ui64Value = psProgram->asStreams[ui32VB].uAddress.uiAddr;
PDSGEN_64BIT_CONST_ADJUST(ui64Value, psConst);
*pui64Dest = ui64Value;
}
else
{
FF_ERROR("PDSGENVertexShaderDataSegment: Unknown 64bit PDS const");
}
break;
}
case PSC_CONST_VTX_BUFFER_STRIDE:
case PSC_CONST_INVALID:
break;
}
}
return pui32Data + psProgram->psPSCOutput->ui32DataSizeInDW;
}
In case PSC_CONST_VTX_BUFFER_ADDR, the statement *pui64Dest = ui64Value; writes 0x8000e54018 into GDDR (pui64Dest is the user-space CPU virtual address that GDDR is mapped to). Single-stepping *pui64Dest = ui64Value in gdb showed ui64Value was 0x8000e54018, with no error, and rgx_triangle_test then rendered correctly. However, running PDSGENVertexShaderDataSegment with gdb's n command instead of stepping into it reproduced the page fault. Adding a printf of ui64Value just before *pui64Dest = ui64Value; showed that even when stepping over with n, the printed ui64Value was still 0x8000e54018.
Further Analysis with busybox devmem
To see what actually ended up in GDDR, we used busybox devmem to read GDDR physical memory. To determine the physical address, note that rtt_WriteVertexShaderCode calls rtt_GetBufferDevVAddr before PDSGENVertexShaderDataSegment; rtt_GetBufferDevVAddr looks like this:
void rtt_GetBufferDevVAddr(RTT_DEVMEM_BUFFER *psMemBuffer,
IMG_DEV_VIRTADDR *psDevVAddr)
{
RTT_CHECK_BUFFER_OVERRUN(psMemBuffer);
if (psMemBuffer->psMemInfo)
{
psDevVAddr->uiAddr = psMemBuffer->psMemInfo->sDevVirtAddr.uiAddr +
psMemBuffer->ui32CurrentOffset;
}
else
{
psDevVAddr->uiAddr = psMemBuffer->psCodeBlock->sDevVirtAddr.uiAddr +
psMemBuffer->ui32CurrentOffset;
}
if (psMemBuffer->bHeapRelativeDevAddr)
{
/* device address relative to heap base */
psDevVAddr->uiAddr -= psMemBuffer->sHeapBaseDevAddr.uiAddr;
}
}
This function gives psVertexPDSBuffer's device virtual address as 0xda0001c400. With the GVA-to-GPA debug output added earlier to the KMD functions MMU_MapPages and MMU_MapPMRFast, we can find the device physical address it is mapped to; the debug output is shown below. The GDDR device physical address space is 40 bits wide and starts at 0x0100000000, so in the device physical address 0x0103658000, 0x3658000 is the offset from the GDDR base:
[ 240.920739] MMU_MapPMRFast 3509 sDevVAddr=0xda0001c000 sDevPAddr=0x0103658000 uiPageSize=0x4000
Note: the LoongArch kernel is configured with a 16 KB page size.
That device physical address is what the GPU sees; busybox devmem needs the CPU-side MMIO physical address. The lspci output below shows the MMIO base address is 0xe0180000000:
05:00.0 VGA compatible controller: Xiangdixian Computing Technology (Chongqing) Ltd. Device 1330 (rev 01) (prog-if 00 [VGA controller])
Subsystem: Xiangdixian Computing Technology (Chongqing) Ltd. Device 0000
Flags: bus master, fast devsel, latency 0, IRQ 46, NUMA node 0
Memory at e0020000000 (64-bit, non-prefetchable) [size=32M]
Memory at e0180000000 (64-bit, prefetchable) [size=2G] // GDDR MMIO base address
Memory at e0140000000 (64-bit, prefetchable) [size=16M]
Expansion ROM at e0024000000 [disabled] [size=1M]
Capabilities: [80] Power Management version 3
Capabilities: [90] MSI: Enable- Count=1/1 Maskable+ 64bit+
Capabilities: [b0] MSI-X: Enable+ Count=32 Masked-
Capabilities: [c0] Express Endpoint, MSI 00
Capabilities: [100] Advanced Error Reporting
Capabilities: [150] Device Serial Number 00-00-00-00-00-00-00-00
Capabilities: [160] Power Budgeting <?>
Capabilities: [180] Resizable BAR <?>
Capabilities: [1b8] Latency Tolerance Reporting
Capabilities: [1c0] Dynamic Power Allocation <?>
Capabilities: [300] Secondary PCI Express
Capabilities: [440] Process Address Space ID (PASID)
Capabilities: [900] L1 PM Substates
Capabilities: [910] Data Link Feature <?>
Capabilities: [920] Lane Margining at the Receiver <?>
Capabilities: [9c0] Physical Layer 16.0 GT/s <?>
Kernel driver in use: xdxgpu
Kernel modules: xdxgpu
Adding the offset 0x3658000 to 0xe0180000000 gives the CPU physical address inside the GPU MMIO window, which busybox devmem can access. Adding the in-page offset 0x400 of psVertexPDSBuffer's device virtual address 0xda0001c400, plus the additional 0x8 produced inside PDSGENVertexShaderDataSegment by data written to pui32Dest before this entry, yields the CPU-side physical address 0xe0180000000 + 0x3658000 + 0x400 + 0x8 = 0xe0183658408.
When stepping over (running PDSGENVertexShaderDataSegment with gdb's n command instead of stepping into it), the high 32 bits of the data (at address 0xE018365840c) were never written; the location still held its previous contents:
root@cqjcdl:~# busybox devmem 0xE0183658408
0x00E54018
root@cqjcdl:~# busybox devmem 0xE018365840c
0xFF080311
Single-stepping the statement *pui64Dest = ui64Value in gdb showed that all 64 bits were written successfully:
root@cqjcdl:~# busybox devmem 0xE0183658408
0x00E54018
root@cqjcdl:~# busybox devmem 0xE018365840c
0x00000080
Changing the Cache Mode
Based on the analysis above, we suspected the cache mode. For verification, we set the cache mode to uncached in the page-fault handler. (In kernel 5.4.18-77-generic, drm_gem_mmap_obj does not actually create the mapping for a gem object; it only sets vma->vm_ops, and the mapping is created on a fault by vma->vm_ops->fault, i.e. xdx_gem_object_vm_fault. Newer kernels create the mapping directly in drm_gem_mmap_obj.) The change:
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 11, 0))
static vm_fault_t xdx_gem_object_vm_fault(struct vm_fault *vmf)
#else
static int xdx_gem_object_vm_fault(struct vm_area_struct *vma,
struct vm_fault *vmf)
#endif
{
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 11, 0))
struct vm_area_struct *vma = vmf->vma;
#endif
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0))
unsigned long vaddr = vmf->address;
#else
unsigned long vaddr = (unsigned long)vmf->virtual_address;
#endif
struct drm_gem_object *gobj = vma->vm_private_data;
struct xdx_bo *bo = to_xdx_gem(gobj);
struct xdx_device *xdev = drm_to_xdev(bo->base.dev);
unsigned long page_offset, pfn;
unsigned long addr;
int ret = 0;
if (bo->pro.placement != XDXGPU_PL_VRAM &&
bo->pro.placement != XDXGPU_PL_TT) {
dev_err(xdev->dev, "no support heap %d\n", bo->pro.placement);
return VM_FAULT_SIGSEGV;
}
if (!get_pages(bo)) {
dev_err(xdev->dev, "no pages\n");
return VM_FAULT_NOPAGE;
}
page_offset = ((vaddr - vma->vm_start) >>
PAGE_SHIFT) + vma->vm_pgoff - drm_vma_node_start(&bo->base.vma_node);
ret = pvr_pmr_get_cpu_paddr(bo->psPMR, (page_offset << PAGE_SHIFT), &addr);
if (ret)
return ret;
pfn = addr >> PAGE_SHIFT;
+#if defined(__loongarch64)
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+#endif
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 20, 0))
return vmf_insert_pfn(vma, vaddr, pfn);
#else
{
int err;
err = vm_insert_pfn(vma, vaddr, pfn);
switch (err) {
case 0:
case -EBUSY:
return VM_FAULT_NOPAGE;
case -ENOMEM:
return VM_FAULT_OOM;
default:
return VM_FAULT_SIGBUS;
}
}
#endif
}
After this change, even when stepping over (running PDSGENVertexShaderDataSegment with gdb's n command instead of stepping into it), the high 32 bits were also written to GDDR successfully.
Cache-Mode Analysis per Platform
The same code runs on x86, arm64, and loongarch64, so why does the problem not occur on x86 and arm64? Could the cache modes differ between platforms?
x86 Platform
The system and kernel versions are as follows:
root@test-System-Product-Name:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.1 LTS
Release: 20.04
Codename: focal
root@test-System-Product-Name:~# uname -a
Linux test-System-Product-Name 5.4.0-42-generic #3 SMP Fri Feb 10 23:06:03 CST 2023 x86_64 x86_64 x86_64 GNU/Linux
All three platforms run 5.4.xx kernels; the kernel code referenced below is 5.4.191. In 5.4.xx, drm_gem_mmap_obj does not actually create the mapping for a gem object; it only sets vma->vm_ops, and the mapping is created on a fault by vma->vm_ops->fault, i.e. xdx_gem_object_vm_fault (newer kernels create the mapping directly in drm_gem_mmap_obj). drm_gem_mmap_obj looks like this:
int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
struct vm_area_struct *vma)
{
struct drm_device *dev = obj->dev;
/* Check for valid size. */
if (obj_size < vma->vm_end - vma->vm_start)
return -EINVAL;
if (obj->funcs && obj->funcs->vm_ops)
vma->vm_ops = obj->funcs->vm_ops;
else if (dev->driver->gem_vm_ops)
vma->vm_ops = dev->driver->gem_vm_ops;
else
return -EINVAL;
vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
vma->vm_private_data = obj;
vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
/* Take a ref for this mapping of the object, so that the fault
* handler can dereference the mmap offset's pointer to the object.
* This reference is cleaned up by the corresponding vm_close
* (which should happen whether the vma was created by this call, or
* by a vm_open due to mremap or partial unmap or whatever).
*/
drm_gem_object_get(obj);
return 0;
}
The line vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); sets vma->vm_page_prot to write-combine. After a page fault is triggered, the mapping is created in xdx_gem_object_vm_fault with the following call flow:
xdx_gem_object_vm_fault
->vmf_insert_pfn
->vmf_insert_pfn_prot
->track_pfn_insert (an empty function on non-x86)
->lookup_memtype
->__pgprot
->insert_pfn
On non-x86 platforms, track_pfn_insert is empty; on x86 it is non-empty when __HAVE_PFNMAP_TRACKING is defined:
void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, pfn_t pfn)
{
enum page_cache_mode pcm;
if (!pat_enabled())
return;
/* Set prot based on lookup */
pcm = lookup_memtype(pfn_t_to_phys(pfn));
*prot = __pgprot((pgprot_val(*prot) & (~_PAGE_CACHE_MASK)) |
cachemode2protval(pcm));
}
Here pat_enabled returns true, and lookup_memtype looks up the page cache mode for a physical address (the start address of the physical page being mapped). The relevant code:
static enum page_cache_mode lookup_memtype(u64 paddr)
{
enum page_cache_mode rettype = _PAGE_CACHE_MODE_WB;
struct memtype *entry;
if (x86_platform.is_untracked_pat_range(paddr, paddr + PAGE_SIZE))
return rettype;
if (pat_pagerange_is_ram(paddr, paddr + PAGE_SIZE)) {
struct page *page;
page = pfn_to_page(paddr >> PAGE_SHIFT);
return get_page_memtype(page);
}
spin_lock(&memtype_lock);
entry = rbt_memtype_lookup(paddr);
if (entry != NULL)
rettype = entry->type;
else
rettype = _PAGE_CACHE_MODE_UC_MINUS;
spin_unlock(&memtype_lock);
return rettype;
}
struct memtype *rbt_memtype_lookup(u64 addr)
{
return memtype_rb_lowest_match(&memtype_rbroot, addr, addr + PAGE_SIZE);
}
So the page cache mode is ultimately looked up in the red-black tree memtype_rbroot.
What does that lookup return? The KMD (GPU kernel mode driver) initialization function xdx_device_init contains the following code and call chain:
int xdx_device_init(struct xdx_device *xdev)
{
struct drm_device *ddev = xdev_to_drm(xdev);
struct pci_dev *pdev = xdev->pdev;
int ret;
xdx_device_module_param_handle(xdev);
/* Get CPU visable VRAM */
xdev->aper_base = pci_resource_start(xdev->pdev, 2);
xdev->aper_size = pci_resource_len(xdev->pdev, 2);
xdev->visible_vram_size = xdev->aper_size;
/* reserve PAT memory space to WC for VRAM */
arch_io_reserve_memtype_wc(xdev->aper_base, xdev->aper_size);
/* Add an MTRR for the VRAM */
xdev->vram_mtrr = arch_phys_wc_add(xdev->aper_base, xdev->aper_size);
......
}
arch_io_reserve_memtype_wc (an empty function on non-x86 platforms)
->io_reserve_memtype
->reserve_memtype
->rbt_memtype_check_insert
->memtype_rb_insert(&memtype_rbroot, new);
int arch_phys_wc_add(unsigned long base, unsigned long size)
{
int ret;
if (pat_enabled() || !mtrr_enabled())
return 0; /* Success! (We don't need to do anything.) */
ret = mtrr_add(base, size, MTRR_TYPE_WRCOMB, true);
if (ret < 0) {
pr_warn("Failed to add WC MTRR for [%p-%p]; performance may suffer.",
(void *)base, (void *)(base + size - 1));
return ret;
}
return ret + MTRR_TO_PHYS_WC_OFFSET;
}
On non-x86 platforms, arch_io_reserve_memtype_wc is a no-op; on x86 with CONFIG_X86_PAT defined, the call chain above ultimately inserts the range from the GDDR start address to its end address into memtype_rbroot with page cache mode write-combine (WC). On non-x86 platforms, arch_phys_wc_add is a no-op; on x86 with CONFIG_MTRR defined, its code is shown above. Because pat_enabled returns true, that function returns immediately.
Therefore, lookup_memtype returns page cache mode write-combine. track_pfn_insert then executes the statement below, which clears the PTE's PAT, PCD, and PWT bits, sets them to the values corresponding to page cache mode write-combine, and assigns the resulting page protection bits to the output parameter prot. For the PAT details, see my other article, "x86 PAT Principles".
*prot = __pgprot((pgprot_val(*prot) & (~_PAGE_CACHE_MASK)) |
cachemode2protval(pcm));
#define _PAGE_CACHE_MASK (_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)
vmf_insert_pfn_prot then calls insert_pfn to set up the PTE, using the page protection bits obtained from track_pfn_insert. So vmf_insert_pfn does not use the vma->vm_page_prot set in drm_gem_mmap_obj.
From the analysis above, x86 uses the write-combine cache mode.
The following is quoted from the "IA-32 Intel Architecture Software Developer's Manual, Volume 3: System Programming Guide":
Write Combining (WC)—System memory locations are not cached (as with uncacheable memory) and coherency is not enforced by the processor's bus coherency protocol. Speculative reads are allowed. Writes may be delayed and combined in the write combining buffer (WC buffer) to reduce memory accesses. If the WC buffer is partially filled, the writes may be delayed until the next occurrence of a serializing event; such as, an SFENCE or MFENCE instruction, CPUID execution, a read or write to uncached memory, an interrupt occurrence, or a LOCK instruction execution. This type of cache control is appropriate for video frame buffers, where the order of writes is unimportant as long as the writes update memory so they can be seen on the graphics display. See Section 10.3.1., "Buffering of Write Combining Memory Locations", for more information about caching the WC memory type. This memory type is available in the Pentium Pro and Pentium II processors by programming the MTRRs or in the Pentium III, Pentium 4, and Intel Xeon processors by programming the MTRRs or by selecting it through the PAT.
The manual states that even if the WC buffer is not full, a subsequent SFENCE/MFENCE/CPUID instruction, a read or write of uncached memory, an interrupt, and so on will trigger the write-out of the WC buffer.
When the KMD sends a KCCB command to the GPU meta firmware, it calls RGXSendCommandRaw, whose call chain is shown below. On x86_64 this ends in an sfence instruction, which forces data in a partially filled WC buffer out to physical memory. Since the KMD call to RGXSendCommandRaw happens after the UMD call to PDSGENVertexShaderDataSegment, data cannot be left stranded in a WC buffer:
RGXSendCommandRaw
->OSWriteMemoryBarrier
->wmb, implemented on x86_64 as:
->#define wmb() asm volatile("sfence" : : : "memory")
arm64 Platform
The system and kernel versions are as follows:
root@cqjcdl-pc:~# lsb_release -a
No LSB modules are available.
Distributor ID: Kylin
Description: Kylin V10 SP1
Release: v10
Codename: kylin
root@cqjcdl-pc:~# uname -a
Linux cqjcdl-pc 5.4.18-75-generic #64-KYLINOS SMP Sat Oct 22 03:27:37 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
root@cqjcdl-pc:~#
On arm64, track_pfn_insert, arch_io_reserve_memtype_wc, and arch_phys_wc_add are all empty functions, and as the code below shows, vmf_insert_pfn uses exactly the vma->vm_page_prot set in drm_gem_mmap_obj as the page protection bits, so arm64 also uses the write-combine cache mode.
vm_fault_t vmf_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
unsigned long pfn)
{
return vmf_insert_pfn_prot(vma, addr, pfn, vma->vm_page_prot);
}
vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
unsigned long pfn, pgprot_t pgprot)
{
/*
* Technically, architectures with pte_special can avoid all these
* restrictions (same for remap_pfn_range). However we would like
* consistency in testing and feature parity among all, so we should
* try to keep these invariants in place for everybody.
*/
BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)));
BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
(VM_PFNMAP|VM_MIXEDMAP));
BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
BUG_ON((vma->vm_flags & VM_MIXEDMAP) && pfn_valid(pfn));
if (addr < vma->vm_start || addr >= vma->vm_end)
return VM_FAULT_SIGBUS;
if (!pfn_modify_allowed(pfn, pgprot))
return VM_FAULT_SIGBUS;
track_pfn_insert(vma, &pgprot, __pfn_to_pfn_t(pfn, PFN_DEV));
return insert_pfn(vma, addr, __pfn_to_pfn_t(pfn, PFN_DEV), pgprot,
false);
}
LoongArch64 Platform
The loongarch64 platform behaves like arm64 here, so before the code change it also used the write-combine cache mode.
Finding the Root Cause
We contacted the Loongson engineers, who told us the instruction that flushes the WC buffer is 'dbar 0':
wmb
->fast_wmb()
->__sync()
-> #define __sync() __asm__ __volatile__("dbar 0" : : : "memory")
To rule out other influences, we replaced the variable with an immediate value and added inline assembly, as follows:
IMG_INTERNAL IMG_UINT32 *PDSGENVertexShaderDataSegment(PDSGEN_VERTEX_SHADER_PROGRAM *psProgram,
IMG_UINT32 *pui32Data)
{
......
case PSC_CONST_VTX_BUFFER_ADDR:
{
IMG_UINT64 ui64Value = 0;
IMG_PUINT64 pui64Dest = (IMG_PUINT64) pui32Dest;
IMG_UINT32 ui32VB = psConst->uData.sVtxBufAddr.ui32VertexBufferIndex;
PVR_ASSERT(ui32VB != PSC_CONST_VTXBUF_OOB_BUFFER);
ui64Value = psProgram->asStreams[ui32VB].uAddress.uiAddr;
PDSGEN_64BIT_CONST_ADJUST(ui64Value, psConst);
- //*pui64Dest = ui64Value;
+ *pui64Dest = 0x8000E54018;
+ __asm__ __volatile__(
+ "dbar 0\n\t"
+ : /* no output */
+ : /* no input */
+ : "memory");
+
break;
}
......
}
Testing showed the change had no effect; the result was the same as before.
To verify whether the data reached the PCIe device, we attached a PCIe protocol analyzer; all data sent to the PCIe device passes through it. When single-stepping the statement *pui64Dest = ui64Value with gdb's s command, both the high and low 32-bit words were captured; when running PDSGENVertexShaderDataSegment with gdb's n command without stepping into it, only the low 32 bits were captured and the high 32 bits never appeared.
Later, after the Loongson engineers consulted with other engineers, it emerged that another GPU vendor had hit the same problem on the same platform: the Loongson chip's write-combine support is buggy, and Loongson will fix it later. That vendor likewise worked around it by switching to uncached mode first.