A fatal error has been detected SIGSEGV (0xb)

When the Java process hit 99.99% memory usage and was doing Full GCs back to back, it died with the fatal error `A fatal error has been detected SIGSEGV (0xb)`. Normally even that situation should not bring the process down. Investigation pointed to the `UseConcMarkSweepGC` collector as the likely cause; after switching to the G1 collector the crash has not recurred.


#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007f231f01ee1b, pid=5948, tid=139787906176768
#
# JRE version: Java(TM) SE Runtime Environment (8.0_60-b27) (build 1.8.0_60-b27)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.60-b23 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V  [libjvm.so+0x68de1b]  java_lang_Class::signers(oopDesc*)+0x1b
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
#

---------------  T H R E A D  ---------------

Current thread (0x00007f23180f5000):  VMThread [stack: 0x00007f22e86de000,0x00007f22e87de000] [id=6016]

siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x0000000000000040
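
The log also complains that it could not write a core dump. Enabling core dumps in the shell that launches Java makes the next crash analyzable with gdb; a minimal sketch (the core_pattern path is an assumption, adjust to taste):

ulimit -c unlimited
# optional, needs root; choose where core files land:
sysctl -w kernel.core_pattern=/tmp/core.%e.%p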

Launch script

/spark/jdk1.8.0_60/bin/java \
  -cp /spark/spark-2.2.0-bin-hadoop2.7/jars/alluxio-1.8.0-client.jar:/spark/spark-2.2.0-bin-hadoop2.7/conf/:/spark/spark-2.2.0-bin-hadoop2.7/jars/* \
  -Xmx5G \
  -XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCTimeStamps -Xloggc:/spark/spark-jobserver/gc.out \
  -XX:MaxPermSize=512m -XX:+CMSClassUnloadingEnabled \
  -XX:MaxDirectMemorySize=5G -XX:+HeapDumpOnOutOfMemoryError \
  -Djava.net.preferIPv4Stack=true \
  -Dcom.sun.management.jmxremote.port=9999 \
  -Dcom.sun.management.jmxremote.rmi.port=9999 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dlog4j.configuration=file:/spark/spark-jobserver/log4j-server.properties \
  -DLOG_DIR=/spark/log/job-server \
  -Dspark.executor.uri=/home/spark/spark-1.6.0.tar.gz \
  org.apache.spark.deploy.SparkSubmit \
  --conf spark.driver.memory=5G \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:/spark/spark-jobserver/log4j-server.properties -DLOG_DIR=/spark/log/job-server" \
  --conf "spark.driver.extraJavaOptions=-XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCTimeStamps -Xloggc:/spark/spark-jobserver/gc.out -XX:MaxPermSize=512m -XX:+CMSClassUnloadingEnabled -XX:MaxDirectMemorySize=5G -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.rmi.port=9999 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dlog4j.configuration=file:/spark/spark-jobserver/log4j-server.properties -DLOG_DIR=/spark/log/job-server -Dspark.executor.uri=/home/spark/spark-1.6.0.tar.gz" \
  --class spark.jobserver.JobServer \
  /spark/spark-jobserver/spark-job-server.jar /spark/spark-jobserver/local.conf
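
Since several of these flags are duplicated between the outer java command and spark.driver.extraJavaOptions, it is worth confirming which collector the running JVM actually ended up with. A quick sketch using the stock JDK tools, with <pid> standing in for the job server's process id:

/spark/jdk1.8.0_60/bin/jcmd <pid> VM.flags | tr ' ' '\n' | grep GC
# or inspect the resolved flag values at startup:
/spark/jdk1.8.0_60/bin/java -XX:+UseConcMarkSweepGC -XX:+PrintFlagsFinal -version | grep -E 'UseConcMarkSweepGC|UseG1GC'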

Analysis

The background: the Java process was doing back-to-back Full GCs with memory at 99.99%, a textbook blown-heap scenario.
The first step is therefore to look at the application code and work out why so much data is ending up in the driver; a heap dump, as sketched below, shows which objects dominate.
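
The launch command already sets -XX:+HeapDumpOnOutOfMemoryError; while the process is still alive and churning through Full GCs, a dump can also be taken by hand and inspected with MAT or jhat. A minimal sketch, where <pid> is again the job server's process id:

/spark/jdk1.8.0_60/bin/jmap -histo:live <pid> | head -n 20
/spark/jdk1.8.0_60/bin/jmap -dump:live,format=b,file=/tmp/driver-heap.hprof <pid>

(Both commands trigger a Full GC themselves, so expect a pause.)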

That said, even with the heap completely exhausted, the JVM should fail with an OutOfMemoryError, not a SIGSEGV: si_addr 0x0000000000000040 in the log above is a dereference at a small offset from a NULL pointer inside the VMThread, which points at a JVM bug rather than application code.
Reports found online blame the UseConcMarkSweepGC option for exactly this kind of crash.

After switching to G1, the crash has not recurred.
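
For reference, a minimal sketch of the flag substitution on the driver; the pause-time target and Metaspace cap below are assumptions, not values from the original deployment. Note that on JDK 8 -XX:MaxPermSize is ignored anyway (PermGen was replaced by Metaspace), and -XX:+CMSClassUnloadingEnabled only applies to CMS:

# before (crashing):
#   -XX:+UseConcMarkSweepGC -XX:MaxPermSize=512m -XX:+CMSClassUnloadingEnabled
# after (GC_OPTS is a hypothetical helper variable):
GC_OPTS="-XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:MaxMetaspaceSize=512m"
# keep the GC logging and heap-dump flags as they were:
GC_OPTS="$GC_OPTS -verbose:gc -XX:+PrintGCTimeStamps -Xloggc:/spark/spark-jobserver/gc.out -XX:+HeapDumpOnOutOfMemoryError"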
