Flink fails to start after installation
Versions: JDK 1.8, Flink 1.12.0
Scenario:
After installing Flink, I ran ./start-cluster.sh from the bin directory, but jps showed that the expected Flink processes had not started. The root cause turned out to be that the server has less memory than Flink's default startup configuration requires, so the memory settings have to be lowered in the configuration file. The walkthrough below shows the analysis: when a service fails to start without printing any error to the console, the log files are the place to trace the problem back to its source.
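Before digging into the logs, it is worth confirming how much memory the machine actually has free. A quick check, assuming an ordinary Linux server (these are standard system tools, not part of the Flink distribution):
free -h                              # physical memory and swap, human-readable
ps aux --sort=-%mem | head -n 10     # which processes are holding the most memory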
Analysis, part 1
cd into the log directory and look around; every log file there turns out to have the .out extension.
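For reference, a plain listing is all that is needed here (the installation path /usr/local/flink-1.12.0 is taken from the log output quoted below; adjust it to your own install):
cd /usr/local/flink-1.12.0/log
ls -lt        # newest files first; the *.out files capture each daemon's stdout/stderr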
Open one of them at random:
cat flink-root-standalonesession-1-VM-0-5-centos.out
>>>
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000d5550000, 715849728, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 715849728 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /usr/local/flink-1.12.0/bin/hs_err_pid4481.log
The output says that memory allocation failed, and it also points to a more detailed error report under bin (the PID in the file name changes from run to run, which is why different PIDs appear in the excerpts). Open it with cat:
cat /usr/local/flink-1.12.0/bin/hs_err_pid2610.log
>>> A very long dump of log and environment information appears; the important part is at the very beginning:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 715849728 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# The process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2749), pid=2610, tid=0x00007fe3331ad700
#
# JRE version: (8.0_261-b12) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.261-b12 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
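One of the possible solutions listed in the report is to increase physical memory or swap space. On a small cloud server that cannot get more RAM, adding a swap file is often the quickest relief. A minimal sketch, assuming root access and about 1 GB of spare disk (the /swapfile path and the size are illustrative, not taken from the original setup):
dd if=/dev/zero of=/swapfile bs=1M count=1024   # create a 1 GB file
chmod 600 /swapfile                             # restrict permissions as mkswap requires
mkswap /swapfile                                # format it as swap
swapon /swapfile                                # enable it immediately
free -h                                         # confirm the extra swap is visible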
The report also tells us that writing a core dump failed because core dumps are disabled, and that we should try "ulimit -c unlimited" before starting Java again; in other words, the JVM itself recommends the command ulimit -c unlimited.
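For completeness, this is how that hint is applied. Note that ulimit -c only controls whether a core dump file is written for debugging; it does not give the JVM any more memory, and it only affects the current shell session:
ulimit -c              # show the current core dump limit (0 means disabled)
ulimit -c unlimited    # enable core dumps for this shell session
cd /usr/local/flink-1.12.0/bin && ./start-cluster.sh   # then try starting Flink again from the same shell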
I tried it, and the cluster still would not start. What actually fixes the problem is shrinking the JVM memory that Flink asks for: add the following setting to the flink-conf.yaml configuration file under conf:
env.java.opts: -Xms128m -Xmx256m
Started successfully!
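A closing note: env.java.opts passes raw JVM flags to every Flink process. Flink 1.12 also exposes dedicated memory options in flink-conf.yaml from which it derives the JVM settings itself, so an alternative low-memory configuration could look like the sketch below. The sizes are guesses for a small test machine, not values from the original setup, and they must stay above Flink's internal minimums or the processes will refuse to start:
jobmanager.memory.process.size: 1024m
taskmanager.memory.process.size: 1024m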