Memory Utilization on the Linux Operating System
Source:
Linux System Memory Utilization (Doc ID 1514705.1)
Applies to:
Linux OS - Version Oracle Linux 5.0 to Oracle Linux 6.4 [Release OL5 to OL6U4]
Information in this document applies to any platform.
Goal:
Some Linux users are confused by Linux memory usage: the amount of memory used by applications does not match what the free command reports.
Questions and Answers:
Question 1: How to read the output of the free command
[root@localhost /]# free -tm
total used free shared buffers cached
Mem: 7880 6934 945 0 170 3631
-/+ buffers/cache: 3133 4747
Swap: 8191 0 8191
Total: 16072 6934 9137
Why does this server have only 945MB of memory free? Which processes are consuming so much memory?
Answer 1: There are five lines in the output of free; for memory usage we care about the second and the third.
Answer 1.1: From the operating system's point of view:
The second line shows memory usage from the operating system's point of view:
7880MB | Total memory |
6934MB | Used memory, which includes the buffers and cached memory. |
170MB | Memory used for buffers: relatively temporary storage for raw disk blocks, which should not grow tremendously large. |
3631MB | Memory used for cached: the in-memory cache for files read from disk (the page cache, as well as the dentry and inode caches). |
The buffers/cache improve I/O performance; whenever an application needs more memory, the buffers/cache can be returned to the operating system.
Note: some recently used memory is generally not returned to the operating system unless absolutely necessary.
[root@szem6 /]# cat /proc/meminfo |grep Active:
Active: 2900528 kB
Answer 1.2: From the application's point of view:
Excluding 'buffers' and 'cached' from the 'used' memory gives the memory usage as seen by applications:
The free memory is:
free + buffers + cached:
945 + 170 + 3631 = 4746MB (the output shows 4747MB; the 1MB difference comes from rounding to whole megabytes)
The used memory is:
used - buffers - cached:
6934 - 170 - 3631 = 3133MB
This is exactly what the third line of the free output shows:
3133MB used, and 4747MB free.
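The arithmetic above can be sketched as a small shell helper; the function name app_view is illustrative, and the sample values are taken from the "Mem:" line of the free output quoted earlier:

```shell
#!/bin/sh
# app_view: derive the application-level view from the OS-level "Mem:" line.
# Arguments: used free buffers cached (all in MB, as printed by `free -m`).
app_view() {
    used=$1; mem_free=$2; buffers=$3; cached=$4
    # Memory really consumed by applications: used minus reclaimable caches.
    app_used=$((used - buffers - cached))
    # Memory available to applications: free plus reclaimable caches.
    app_free=$((mem_free + buffers + cached))
    echo "app used: ${app_used}MB, app free: ${app_free}MB"
}

# Values from the "Mem:" line in the example above.
app_view 6934 945 170 3631
# -> app used: 3133MB, app free: 4746MB
```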
Answer 1.3: More precisely:
Although cached memory can be released (see Question 2), the Linux kernel will not reclaim the caches of recently used files unless it has to.
Releasing the caches of recently used files may cause performance degradation (see Question 3).
Two fields in /proc/meminfo are relevant here:
Active: the total buffer and cache memory that has been used recently; this is usually not reclaimed.
Inactive: the total buffer and cache memory that has not been used recently; this can be reclaimed by the Linux kernel.
# cat /proc/meminfo
MemTotal: 8069496 kB
MemFree: 6369944 kB
Buffers: 341724 kB
Cached: 414800 kB
SwapCached: 0 kB
Active: 839852 kB
Inactive: 503996 kB
[SNIP]
Therefore, for a system to keep good performance, you may consider the free memory to be MemFree + Inactive, while free + buffers + cached is the upper bound.
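Both estimates can be computed from /proc/meminfo with a short awk pipeline. The sketch below feeds in the sample values quoted above via a heredoc; on a live system you would replace the heredoc with `cat /proc/meminfo`:

```shell
#!/bin/sh
# Compute both free-memory estimates from (a snapshot of) /proc/meminfo.
awk '
/^MemFree:/  { memfree  = $2 }
/^Buffers:/  { buffers  = $2 }
/^Cached:/   { cached   = $2 }
/^Inactive:/ { inactive = $2 }
END {
    # Conservative estimate: only memory the kernel reclaims willingly.
    print "conservative:", memfree + inactive, "kB"
    # Upper bound: assumes every buffer/cache page could be reclaimed.
    print "upper bound:", memfree + buffers + cached, "kB"
}' <<EOF
MemTotal:        8069496 kB
MemFree:         6369944 kB
Buffers:          341724 kB
Cached:           414800 kB
SwapCached:            0 kB
Active:           839852 kB
Inactive:         503996 kB
EOF
```

With the sample values this prints 6873940 kB as the conservative estimate and 7126468 kB as the upper bound.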
Question 2: How to force the release of cached memory
In general, releasing cached memory is discouraged unless you are under memory pressure; see How to Check Whether a System is Under Memory Pressure (Note 1502301.1).
If you do want to drop the cached memory, follow the instructions below:
1)flush file system buffers
# sync; sync; sync
2)drop cached memory
根据Linux kernel 文档:writing to /proc/sys/vm/drop_caches will cause the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free.
To free pagecache:
# echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
# echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
# echo 3 > /proc/sys/vm/drop_caches
Question 3: How does cached memory improve system performance?
Here is an example that shows the difference.
The initial state:
[root@localhost /]$ free -m
total used free shared buffers cached
Mem: 7880 6427 1452 0 438 3321
-/+ buffers/cache: 2667 5213
Swap: 8191 0 8191
Create a new file of 200MB; the cached figure grows by roughly 200MB, matching the size of the newly created file.
[root@localhost /]$ dd if=/dev/zero of=/tmp/test.img bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.12154 s, 1.7 GB/s
[root@localhost /]$ free -m
total used free shared buffers cached
Mem: 7880 6612 1267 0 438 3512
-/+ buffers/cache: 2661 5219
Swap: 8191 0 8191
With the file cached, run an I/O test on it.
[root@localhost /]$ time cat /tmp/test.img >/dev/null
real 0m0.102s
user 0m0.000s
sys 0m0.038s
Drop the cached memory, so that the file is no longer cached.
[root@localhost /]# echo 3 > /proc/sys/vm/drop_caches
[root@localhost /]# free -m
total used free shared buffers cached
Mem: 7880 2843 5036 0 1 311
-/+ buffers/cache: 2529 5350
Swap: 8191 0 8191
Redo the test; the result is about 20 times slower than the cached read.
[root@localhost /]# time cat /tmp/test.img > /dev/null
real 0m2.016s
user 0m0.001s
sys 0m0.128s
Question 4: How to find the memory used by a specific process?
There is no single precise figure for the memory consumption of a specific process, for two reasons:
1) Linux does not load the entire address space (VSZ) into memory; only part of it is resident in physical memory (RSS).
For example, the virtual memory size required by bash is 108500KB, but only 1864KB is resident in memory.
[root@localhost /]# ps aux | head -1; ps aux | grep 16222
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 16222 0.0 0.0 108500 1864 pts/13 S 11:33 0:00 -bash
2) Many libraries are shared between processes; a single in-memory copy serves many different processes.
For example, in the following output, ld-2.12.so, libc-2.12.so, libdl-2.12.so, and libnss_files-2.12.so are shared libraries, which may be in use by other processes as well.
[root@localhost /]# pmap -x 16222
16222: -bash
Address Kbytes RSS Dirty Mode Mapping
0000000000400000 848 572 0 r-x-- bash
00000000006d3000 40 40 40 rw--- bash
00000000006dd000 20 20 20 rw--- [ anon ]
00000000008dc000 36 8 0 rw--- bash
00000000008e5000 380 252 252 rw--- [ anon ]
000000367d200000 128 108 0 r-x-- ld-2.12.so
000000367d41f000 4 4 4 r---- ld-2.12.so
000000367d420000 4 4 4 rw--- ld-2.12.so
000000367d421000 4 4 4 rw--- [ anon ]
000000367d600000 1572 568 0 r-x-- libc-2.12.so
000000367d789000 2048 0 0 ----- libc-2.12.so
000000367d989000 16 16 4 r---- libc-2.12.so
000000367d98d000 4 4 4 rw--- libc-2.12.so
000000367d98e000 20 16 16 rw--- [ anon ]
000000367de00000 8 8 0 r-x-- libdl-2.12.so
000000367de02000 2048 0 0 ----- libdl-2.12.so
000000367e002000 4 4 4 r---- libdl-2.12.so
000000367e003000 4 4 4 rw--- libdl-2.12.so
000000368e600000 116 60 0 r-x-- libtinfo.so.5.7
000000368e61d000 2048 0 0 ----- libtinfo.so.5.7
000000368e81d000 16 12 4 rw--- libtinfo.so.5.7
00007ffff1f40000 96836 60 0 r---- locale-archive
00007ffff7dd1000 48 24 0 r-x-- libnss_files-2.12.so
00007ffff7ddd000 2048 0 0 ----- libnss_files-2.12.so
00007ffff7fdd000 4 4 4 r---- libnss_files-2.12.so
00007ffff7fde000 4 4 4 rw--- libnss_files-2.12.so
00007ffff7fdf000 12 12 12 rw--- [ anon ]
00007ffff7ff4000 8 8 8 rw--- [ anon ]
00007ffff7ff6000 28 20 0 r--s- gconv-modules.cache
00007ffff7ffd000 4 4 4 rw--- [ anon ]
00007ffff7ffe000 4 4 0 r-x-- [ anon ]
00007ffffffde000 132 20 20 rw--- [ stack ]
ffffffffff600000 4 0 0 r-x-- [ anon ]
---------------- ------ ------ ------
total kB 108500 1864 412
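The totals on the last line of pmap -x are simply column sums. The sketch below reproduces them for the first three mappings of the example (heredoc values copied from the output above); on a live system you would pipe `pmap -x <pid>` through the same awk program:

```shell
#!/bin/sh
# Sum the Kbytes and RSS columns of pmap -x output to reproduce its totals.
awk '
NF >= 3 && $2 ~ /^[0-9]+$/ {
    kbytes += $2   # virtual size of each mapping
    rss    += $3   # resident part of each mapping
}
END { print "total kB", kbytes, rss }' <<EOF
0000000000400000     848     572       0 r-x--  bash
00000000006d3000      40      40      40 rw---  bash
00000000006dd000      20      20      20 rw---  [ anon ]
EOF
```

For these three mappings this prints "total kB 908 632": 908KB of virtual size, of which 632KB is resident.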
Question 5: When HugePages are enabled:
Please see the following note for hugepage:
When HugePages are enabled, the kernel reserves a specified amount of memory for them.
Only applications that support HugePages, such as an Oracle Database with the proper configuration, are able to make use of them.
To verify the hugepage utilization:
# cat /proc/meminfo
[SNIP]
HugePages_Total: 1000
HugePages_Free: 600
HugePages_Rsvd: 400
HugePages_Surp: 0
Hugepagesize: 2048 kB
[SNIP]
In the above example, the kernel reserved 1000 * 2048kB (HugePages_Total * Hugepagesize) for HugePages.
Part of that reservation is unused and wasted; the amount equals (HugePages_Free - HugePages_Rsvd) * Hugepagesize,
that is, (600 - 400) * 2048kB = 409600kB (400MB) wasted.
It is important not to waste HugePages, since most applications cannot use them.
For Oracle Database, please follow this note for proper hugepage configuration.
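The waste calculation can be scripted. The sketch below runs the formula over the sample values quoted above via a heredoc; swap the heredoc for `cat /proc/meminfo` on a live system:

```shell
#!/bin/sh
# Wasted HugePages memory: (HugePages_Free - HugePages_Rsvd) * Hugepagesize.
awk '
/^HugePages_Free:/ { hp_free = $2 }
/^HugePages_Rsvd:/ { hp_rsvd = $2 }
/^Hugepagesize:/   { hp_size = $2 }   # in kB
END { print "wasted:", (hp_free - hp_rsvd) * hp_size, "kB" }' <<EOF
HugePages_Total:    1000
HugePages_Free:      600
HugePages_Rsvd:      400
HugePages_Surp:        0
Hugepagesize:       2048 kB
EOF
```

With the sample values this prints "wasted: 409600 kB", i.e. the 400MB computed above.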