Monitoring performance of Linux automatic memory migration between NUMA nodes

This article examines how automatic memory migration works on Linux systems with a Non-Uniform Memory Access (NUMA) architecture and its impact on performance. Using the Performance Co-Pilot (PCP) toolkit to monitor page faults and page migrations between nodes, it analyzes the potential negative effect of the automatic migration mechanism on latency-sensitive applications and shows how to disable automatic migration.

 


Updated April 6, 2018 04:08


Non-Uniform Memory Access (NUMA) is used on larger computer systems to give the machine enough aggregate memory bandwidth for its many processors. The downside of NUMA is that the placement of tasks and memory becomes a concern: a processor's access to memory on other NUMA nodes is slower than access to memory on the local NUMA node. Processes may be migrated between nodes to improve processor utilization, and memory may be allocated on nodes other than the one holding the CPU a task is currently running on. The kernel's automatic NUMA balancing can migrate pages of memory to be closer to the processors using that memory. However, there is overhead associated with NUMA balancing, and there are cases where the automatic memory migration can hurt performance (for example, latency-sensitive applications).

How the Linux automatic NUMA page migration mechanism works

On modern machines there are two types of addresses: virtual and physical. The virtual addresses used by programs are mapped to physical memory located on the various NUMA nodes in the machine. When a mapping changes, a virtual address can refer to physical memory on a different NUMA node. To help determine whether a page of memory should be moved, the kernel removes the virtual-to-physical mapping for some regions of memory (pages). When a program attempts to access a virtual address that is not mapped to physical memory, the access triggers a page fault exception. The kernel can then determine which processor is attempting to access the page and whether the page is on the same NUMA node as that processor or on a remote NUMA node. If the page is on a remote NUMA node, the kernel can migrate it to the local NUMA node, so that later accesses from that CPU to that page have lower latency.
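
Outside of PCP, the balancing switch and the raw counters that the PCP mem.vmstat metrics below map onto can be inspected directly; on kernels built with NUMA balancing support, for example:

$ cat /proc/sys/kernel/numa_balancing
$ grep -E 'numa_hint_faults|numa_pages_migrated|pgmigrate' /proc/vmstat

A value of 1 from the sysctl means automatic NUMA balancing is enabled; the /proc/vmstat fields are cumulative counts since boot, which the pmrep configurations below convert to rates.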

Performance Co-Pilot (PCP) monitoring of page migrations

Performance Co-Pilot (PCP) provides access to the NUMA page fault and migration metrics exported by the Linux kernel. The configurable pmrep tool available in PCP can be set up to monitor the page faults used to trigger the page migrations with the following configuration:


[numa-hint-faults]
header = yes
unitinfo = no
globals = no
timestamp = yes
width = 15
precision = 2
delimiter = " "
mem.vmstat.numa_hint_faults = faults/s,,,,
mem.vmstat.numa_hint_faults_local = faults_local/s,,,,
local = mem.vmstat.numa_hint_faults_local_percent
local.label = %%local
local.formula = 100 *
  (rate(mem.vmstat.numa_hint_faults)
  ?
    rate(mem.vmstat.numa_hint_faults_local)/rate(mem.vmstat.numa_hint_faults)
  :
    mkconst(1, type="double", semantics="instant") )
local.width = 7
faults_remote = mem.vmstat.numa_hint_faults_remote
faults_remote.formula = mem.vmstat.numa_hint_faults - mem.vmstat.numa_hint_faults_local
faults_remote.label = faults_remote/s
remote = mem.vmstat.numa_hint_faults_remote_percent
remote.formula = 100 *
  (rate(mem.vmstat.numa_hint_faults)
  ?
    (1 - rate(mem.vmstat.numa_hint_faults_local)/rate(mem.vmstat.numa_hint_faults))
  :
    mkconst(0, type="double", semantics="instant") )
remote.label = %%remote
remote.width = 7

Below is the output of pmrep monitoring the hinting page faults with the above pmrep configuration stored as numa-hint-faults.conf. The first column is the local time. By default a new measurement is taken every second and a new row of output is produced. The second column is the rate of all hinting page faults. The third and fourth columns give the rate of local hinting page faults and the percentage of the total hinting faults that are local; local hinting faults are page faults where the processor that triggered the hinting page fault and the memory it referred to are on the same NUMA node. The last two columns are the rate of remote hinting faults and the percentage of the hinting faults that are remote; remote hinting faults are page faults where the processor that triggered the hinting page fault and the memory it referred to are on different NUMA nodes. In the example below the percentage of remote hinting page faults is very low: most intervals have no remote hinting page faults, and the highest percentage of remote hinting faults was 2.93%.


$ pmrep -c ~/numa-hint-faults.conf :numa-hint-faults
                faults/s  faults_local/s  %local faults_remote/s %remote
12:36:02             N/A             N/A     N/A             N/A     N/A
12:36:03           48.88           48.88  100.00            0.00    0.00
12:36:04           18.00           18.00  100.00            0.00    0.00
12:36:05           20.01           20.01  100.00            0.00    0.00
12:36:06         2157.61         2155.61   99.91            2.00    0.09
12:36:07          340.97          330.97   97.07           10.00    2.93
12:36:08          184.03          184.03  100.00            0.00    0.00
12:36:09          401.18          401.18  100.00            0.00    0.00
12:36:10           42.98           42.98  100.00            0.00    0.00
12:36:11          360.99          360.99  100.00            0.00    0.00

When tasks are assigned to different nodes via the taskset command while the tasks' associated memory has not yet been migrated, the percentage of remote hinting page faults increases dramatically, as seen in the pmrep output below.
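
One way to reproduce this mismatch (a sketch only: 12345 is a placeholder PID, and the CPU list must be adjusted to match a node reported by numactl --hardware) is to move a running process onto CPUs belonging to a different node while its memory remains where it was allocated:

$ numactl --hardware            # list which CPUs belong to which NUMA node
$ taskset -pc 8-15 12345        # pin placeholder PID 12345 to the other node's CPUs (8-15 here)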


$ pmrep -c ~/numa-hint-faults.conf :numa-hint-faults
                faults/s  faults_local/s  %local faults_remote/s %remote
12:38:09             N/A             N/A     N/A             N/A     N/A
12:38:10           47.91            4.99   10.42           42.92   89.58
12:38:11           46.97            0.00    0.00           46.97  100.00
12:38:12          342.25          118.09   34.50          224.17   65.50
12:38:13          215.02            1.00    0.47          214.02   99.53
12:38:14           76.00            1.00    1.32           75.00   98.68
12:38:15           41.00            0.00    0.00           41.00  100.00
12:38:16          264.00            4.00    1.52          260.00   98.48
12:38:17          185.98          113.99   61.29           71.99   38.71
12:38:18           17.00            7.00   41.18           10.00   58.82

PCP can also monitor the actual migrations: the number of pages successfully migrated, the number of pages for which migration was attempted but failed, and an estimate of the average bandwidth used per node. Below is a pmrep configuration to display this information.


[numa-pgmigrate-per-node]
header = yes
unitinfo = no
globals = no
timestamp = yes
width = 15
precision = 3
delimiter = " "
node_bw = mem.vmstat.numa_bandwidth
node_bw.label = MB/s/node
node_bw.formula = rate(mem.vmstat.numa_pages_migrated) *
  hinv.pagesize/hinv.nnode/mkconst(1000000, type="double", semantics="instant")
node_pg = mem.vmstat.numa_pages
node_pg.label = auto pg/s/node
node_pg.formula = rate(mem.vmstat.numa_pages_migrated)/hinv.nnode
node_succ_pg = mem.vmstat.numa_pgmigrate_success
node_succ_pg.label = success/s/node
node_succ_pg.formula = rate(mem.vmstat.pgmigrate_success)/hinv.nnode
node_fail_pg = mem.vmstat.numa_pgmigrate_fail
node_fail_pg.label = fail/s/node
node_fail_pg.formula = rate(mem.vmstat.pgmigrate_fail)/hinv.nnode

Below is example output showing the automatic page migration moving pages in an attempt to co-locate them with the tasks using them. The first column is the local time. The second column is the estimate of the average bandwidth used per node for page migration. The third column is the average rate at which pages are automatically migrated on each node. The third and fourth columns look very similar, but success/s/node also includes pages moved by explicit page migrations, such as those requested with the migratepages command. The last column shows the rate of automatic and explicit page migrations that failed. Ideally, every page migration succeeds and the last column stays at zero.


$ pmrep -c ~/numa-pgmigrates.conf :numa-pgmigrate-per-node
               MB/s/node  auto pg/s/node  success/s/node     fail/s/node
14:02:34             N/A             N/A             N/A             N/A
14:02:35           0.004           0.997           0.997           0.000
14:02:36           0.004           1.000           1.000           0.000
14:02:37           0.014           3.503           3.503           0.000
14:02:38           0.000           0.000           0.000           0.000
14:02:39           2.115         516.261         516.261           0.000
14:02:40           0.008           2.000           2.000           0.000
14:02:41           0.131          32.000          32.000           0.000
14:02:42           0.008           2.002           2.002           0.000
14:02:43           0.006           1.500           1.500           0.000
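
As a sanity check of the node_bw formula, take the 14:02:39 sample: 516.261 pages/s/node multiplied by a 4KB (4096-byte) page size gives 516.261 * 4096 / 1000000 ≈ 2.115 MB/s/node, which matches the MB/s/node column, confirming that hinv.pagesize on this machine is 4KB.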

When might automatic NUMA page migration be useful?

The automatic NUMA migration mechanism uses a fairly simple algorithm to decide whether a page should be moved. It can help in the case where a task starts on one node, allocates memory on that node, and is later moved to another node. The expectation is that the task's threads and memory fit in a single NUMA node and that the task can tolerate the latency added by the additional hinting page faults and the associated migration of page-sized chunks of memory (4KB or 64KB).
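
To judge whether a task's threads and memory actually fit on a single node, a per-process breakdown helps; for example (12345 is again a placeholder PID, and numastat from the numactl package is assumed to be installed):

$ numastat -p 12345             # show how much of the process's memory resides on each NUMA node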

When might automatic NUMA page migration not be helpful?

For latency-sensitive applications, the additional page faults used to hint where to place pages of memory and the movement of 4KB or 64KB chunks to migrate a page could add unacceptable delays. The system administrator can disable the page migrations either with:


# echo 0 > /proc/sys/kernel/numa_balancing

or by adding the following to the kernel boot command line:


numa_balancing=disable
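
The write to /proc/sys/kernel/numa_balancing takes effect immediately but does not survive a reboot; placing kernel.numa_balancing = 0 in a file under /etc/sysctl.d/ reapplies the runtime setting at boot. To add the boot parameter to every installed kernel on RHEL-style systems, one option (a sketch, assuming grubby manages the boot entries) is:

# grubby --update-kernel=ALL --args="numa_balancing=disable"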

The simple algorithm used to decide whether to migrate pages does not work as well for applications that require more threads or more memory than is available in a single NUMA node. Similarly, if threads assigned to different nodes frequently modify pages they share, those pages may bounce back and forth between the nodes. This can be observed as continued high values in the MB/s/node and auto pg/s/node columns of pmrep using the :numa-pgmigrate-per-node configuration from numa-pgmigrates.conf.

There also has to be some free memory available on the NUMA nodes for the automatic migration to work. If the machine has no free memory on the node that the hinting faults suggest a page should be moved to, the automatic migration cannot move the page there. The system then pays the overhead of the automatic page migration monitoring without the benefit of reduced average access times from improved page locality. This can be observed as continued zero or very low values in the success/s/node column and high values in the fail/s/node column of pmrep using the :numa-pgmigrate-per-node configuration from numa-pgmigrates.conf.
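
Per-node free memory can be checked directly to confirm this situation; for example:

$ numactl --hardware            # reports the total and free memory of each node, among other details
$ cat /sys/devices/system/node/node0/meminfo   # per-node MemTotal, MemFree, and related counters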
