#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f231f01ee1b, pid=5948, tid=139787906176768
#
# JRE version: Java(TM) SE Runtime Environment (8.0_60-b27) (build 1.8.0_60-b27)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.60-b23 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V [libjvm.so+0x68de1b] java_lang_Class::signers(oopDesc*)+0x1b
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
--------------- T H R E A D ---------------
Current thread (0x00007f23180f5000): VMThread [stack: 0x00007f22e86de000,0x00007f22e87de000] [id=6016]
siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x0000000000000040
Startup script
/spark/jdk1.8.0_60/bin/java \
  -cp /spark/spark-2.2.0-bin-hadoop2.7/jars/alluxio-1.8.0-client.jar:/spark/spark-2.2.0-bin-hadoop2.7/conf/:/spark/spark-2.2.0-bin-hadoop2.7/jars/* \
  -Xmx5G -XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCTimeStamps -Xloggc:/spark/spark-jobserver/gc.out \
  -XX:MaxPermSize=512m -XX:+CMSClassUnloadingEnabled -XX:MaxDirectMemorySize=5G -XX:+HeapDumpOnOutOfMemoryError \
  -Djava.net.preferIPv4Stack=true \
  -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.rmi.port=9999 \
  -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false \
  -Dlog4j.configuration=file:/spark/spark-jobserver/log4j-server.properties -DLOG_DIR=/spark/log/job-server \
  -Dspark.executor.uri=/home/spark/spark-1.6.0.tar.gz \
  org.apache.spark.deploy.SparkSubmit \
  --conf spark.driver.memory=5G \
  --conf spark.executor.extraJavaOptions=-Dlog4j.configuration=file:/spark/spark-jobserver/log4j-server.properties -DLOG_DIR=/spark/log/job-server \
  --conf spark.driver.extraJavaOptions=-XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCTimeStamps -Xloggc:/spark/spark-jobserver/gc.out -XX:MaxPermSize=512m -XX:+CMSClassUnloadingEnabled -XX:MaxDirectMemorySize=5G -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.rmi.port=9999 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dlog4j.configuration=file:/spark/spark-jobserver/log4j-server.properties -DLOG_DIR=/spark/log/job-server -Dspark.executor.uri=/home/spark/spark-1.6.0.tar.gz \
  --class spark.jobserver.JobServer \
  /spark/spark-jobserver/spark-job-server.jar /spark/spark-jobserver/local.conf
Analysis
First, the context in which the crash happened: the Java process was doing Full GCs back to back, with heap occupancy at 99.99% at the time. This is a textbook case of the heap being exhausted.
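If a node gets into this state again, the GC thrashing is easy to confirm with jstat from the same JDK before the process dies. A minimal check (the pid 5948 is the one from the crash header above; substitute the live one):

  /spark/jdk1.8.0_60/bin/jstat -gcutil 5948 1000

When the heap is truly exhausted, the O (old generation) column stays pinned near 100 and the FGC counter increments on almost every one-second sample.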
At that point the first step should be to analyze the job code and work out why data is ending up in the driver at all.
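The usual culprit is an action that materializes a large dataset inside the driver JVM, whose heap is capped at -Xmx5G here. The following is a hypothetical sketch only (the input path and column name are invented; this is not the actual job code), showing the anti-pattern and the executor-side alternative:

import org.apache.spark.sql.SparkSession

object DriverMemorySketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("driver-memory-sketch").getOrCreate()

    // Hypothetical input; stands in for whatever the job actually reads.
    val events = spark.read.parquet("hdfs:///data/events")

    // Anti-pattern: collect() materializes every row in the driver's 5G heap,
    // which is exactly the kind of load that produces back-to-back Full GCs.
    // val allRows = events.collect()

    // Safer: aggregate on the executors and persist only the small result,
    // instead of shipping raw data back to the driver.
    val counts = events.groupBy("event_type").count()
    counts.write.mode("overwrite").parquet("hdfs:///data/event_counts")

    spark.stop()
  }
}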
That said, even when the heap is exhausted, the JVM is supposed to fail with an OutOfMemoryError, not die with a SIGSEGV, and related reports of this problem can be found online. The crash header is consistent with those reports: the problematic frame is inside libjvm.so (java_lang_Class::signers) on the VMThread, and si_addr 0x0000000000000040 means the VM itself dereferenced a near-null pointer. That points to a JVM bug rather than application code, and the reports blame the GC flag -XX:+UseConcMarkSweepGC for triggering it.
After switching to G1, no similar crash has occurred.
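For reference, the change amounts to swapping the CMS flags for G1 in the startup script, in both the outer JVM options and the spark.driver.extraJavaOptions value. A sketch of the substitution (only the GC-related flags shown):

  before: -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled
  after:  -XX:+UseG1GC

-XX:+CMSClassUnloadingEnabled is CMS-specific and is dropped with it; on this JDK (8u60) G1 already unloads classes during concurrent marking by default, so the class-unloading behavior the Job Server relies on is preserved.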