By default, Flink 1.11 on YARN logs through the non-rolling log4j MainAppender. Streaming jobs run indefinitely, so the log files keep growing (especially the TaskManager logs); left unchecked they consume more and more disk space and make the log pages in the web UI sluggish to load. Configuring rolling logs for Flink keeps the log file size under control.
Flink on YARN reads its logging configuration from flink/conf/log4j.properties by default, which can be modified as follows:
# Rolling log configuration
# This affects logging for both user code and Flink
rootLogger.level = DEBUG
rootLogger.appenderRef.rolling.ref = RollingFileAppender
# Uncomment this if you want to _only_ change Flink's logging
#logger.flink.name = org.apache.flink
#logger.flink.level = INFO
# The following lines keep the log level of common libraries/connectors on
# log level INFO. The root logger does not override this. You have to manually
# change the log levels here.
logger.akka.name = akka
logger.akka.level = INFO
logger.kafka.name = org.apache.kafka
logger.kafka.level = INFO
logger.hadoop.name = org.apache.hadoop
logger.hadoop.level = INFO
logger.zookeeper.name = org.apache.zookeeper
logger.zookeeper.level = INFO
# Log all infos in the given rolling file
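The rootLogger above references a RollingFileAppender, which still has to be defined. A minimal sketch of that appender in the same log4j2 properties format is shown below; the 100MB size limit and the cap of 10 retained files are example values to tune for your environment, and ${sys:log.file} is the per-container log path that the Flink YARN runtime sets.

# Define the rolling file appender referenced by the rootLogger above
appender.rolling.name = RollingFileAppender
appender.rolling.type = RollingFile
# ${sys:log.file} is the container log path provided by the Flink YARN runtime
appender.rolling.fileName = ${sys:log.file}
appender.rolling.filePattern = ${sys:log.file}.%i
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
# Roll over once the current file exceeds 100MB (example value)
appender.rolling.policies.type = Policies
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 100MB
# Keep at most 10 rolled files per container (example value)
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 10

With a setup like this, each JobManager and TaskManager log is capped at roughly the size limit times the retained-file count per container. Since flink/conf is shipped with the application at submission time, the change only takes effect for jobs or sessions started after the file is updated.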