Background: an incremental update thread for database records was consuming a lot of memory while running, so I decided to analyze the cause with Java VisualVM and MemoryAnalyzer.
First, start the server and monitor the JVM process with Java VisualVM. The monitoring view looks like this:
[img]http://dl2.iteye.com/upload/attachment/0123/3313/f8206729-4629-3aa1-b842-e3d83a68811a.png[/img]
Start the incremental update thread.
When roughly 1.3 GB of memory has been consumed, dump the heap. The snapshot looks like this:
[img]http://dl2.iteye.com/upload/attachment/0123/3315/9c62eaf3-a2d1-3579-8d1a-c738bbe6c458.png[/img]
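Besides clicking the Dump button in VisualVM, the heap dump can also be triggered programmatically. Below is a minimal sketch assuming a HotSpot JVM (Oracle JDK / OpenJDK); the class name HeapDumper is made up for illustration:
[code]
import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {

    public static void main(String[] args) throws Exception {
        // Look up the HotSpot diagnostic MBean on the platform MBean server
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // Dump only live (reachable) objects to heapdump.hprof
        bean.dumpHeap("heapdump.hprof", true);
    }
}
[/code]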
From the above we can see that most of the memory is consumed by reading records from Oracle.
Open the byte[] instances and inspect their references:
[img]http://dl2.iteye.com/upload/attachment/0123/3317/36227651-5bb9-38c9-9f4e-de0ddb66d0f4.png[/img]
Looking at the references, the byte[] instances are mainly referenced by HashMap$Entry.
Save the heap snapshot as heapdump.hprof.
Next, analyze it with MemoryAnalyzer by opening the heap snapshot file heapdump.hprof. Before doing this, make sure MemoryAnalyzer's maximum heap size is larger than the heap snapshot; this can be adjusted in MemoryAnalyzer's configuration file
MemoryAnalyzer.ini
Mine looks like this:
-startup
plugins/org.eclipse.equinox.launcher_1.3.100.v20150511-1540.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.300.v20150602-1417
-vmargs
-Xmx2560m
Since my heap snapshot is about 1.3 GB, I set the maximum heap to 2.5 GB (-Xmx2560m).
After MemoryAnalyzer finishes analyzing the dump, the main view looks like this:
[img]http://dl2.iteye.com/upload/attachment/0123/3319/8c94d149-4e16-3fe4-af43-e08d6e1a0dae.png[/img]
What we need to focus on are the instance Histogram, the leak suspects report, and the biggest objects.
Leak Suspects: includes leak suspects and a system overview
[img]http://dl2.iteye.com/upload/attachment/0123/3321/876d0b45-374b-34b1-b178-fd97e3f6cc53.png[/img]
The thread org.apache.tomcat.util.threads.TaskThread @ 0x791e33e50 http-apr-8080-exec-8 keeps local variables with total size 992,879,152 (95.20%) bytes.
The memory is accumulated in one instance of "java.util.Hashtable$Entry[]" loaded by "<system class loader>".
The stacktrace of this Thread is available. See stacktrace.
Keywords
java.util.Hashtable$Entry[]
From the description above, the memory is mainly occupied by java.util.Hashtable$Entry[] instances.
Histogram: Lists number of instances per class
[img]http://dl2.iteye.com/upload/attachment/0123/3323/5ead36fb-aad8-384a-a65e-a9ca640888c3.png[/img]
From the Histogram, it is mainly Oracle data that occupies the memory.
Dominator Tree: List the biggest objects and what they keep alive.
[img]http://dl2.iteye.com/upload/attachment/0123/3325/0c045877-5bc9-334c-abcd-6c33a43e18a0.png[/img]
From the Dominator Tree, the memory is mainly held by Hashtable$Entry instances in the Oracle connection thread.
Thread instance view:
[img]http://dl2.iteye.com/upload/attachment/0123/3327/9bfa3bf0-6ea3-3188-83d9-6b3293c8aab6.png[/img]
Summary:
[color=green]From the analysis above, the memory is mainly occupied by Hashtable$Entry instances. This happened because we put the queried database records into a HashMap, which made memory usage spike. The fix is to optimize the incremental update to use paginated queries and to drop unnecessary references to the HashMap objects, as sketched below.[/color]
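To make the fix concrete, here is a minimal sketch of processing the records as a stream instead of caching them all in a HashMap. This is only an illustration under assumptions: the table name, column names, SQL, and fetch size are hypothetical and not taken from the original code; the point is the pattern of setFetchSize plus per-row processing.
[code]
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

public class IncrementalUpdater {

    // Hypothetical SQL and fetch size, for illustration only
    private static final String SQL =
            "SELECT id, payload FROM records WHERE updated_at > ? ORDER BY id";
    private static final int FETCH_SIZE = 500;

    public void process(Connection conn, Timestamp since) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            ps.setTimestamp(1, since);
            // Let the Oracle JDBC driver stream rows in small chunks
            // instead of our code buffering the whole result set in memory
            ps.setFetchSize(FETCH_SIZE);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Handle each row immediately; do not keep it in a HashMap
                    handleRow(rs.getLong("id"), rs.getBytes("payload"));
                }
            }
        }
    }

    private void handleRow(long id, byte[] payload) {
        // Apply the incremental update for this single record here
    }
}
[/code]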