log4j.rootCategory=ERROR, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Set the default spark-shell log level to ERROR. When running the spark-shell,
# the log level for this class is used to overwrite the root logger's log level,
# so that the user can have different defaults for the shell and regular Spark
# apps.
log4j.logger.org.apache.spark.repl.Main=ERROR
# Settings to quiet third party logs that are too verbose
log4j.logger.org.spark_project.jetty=ERROR
log4j.logger.org.spark_project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=ERROR
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=ERROR
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR
# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent
# UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR
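A file like the one above is normally created by copying the template that ships with Spark and lowering the root log level. A minimal sketch, assuming a standard `$SPARK_HOME` layout (the scratch-directory fallback and one-line stand-in template exist only so the commands can be tried anywhere):

```shell
# Pick the real conf dir when SPARK_HOME is set; otherwise use a scratch dir.
SPARK_CONF="${SPARK_HOME:+$SPARK_HOME/conf}"
SPARK_CONF="${SPARK_CONF:-$(mktemp -d)}"

# Spark ships log4j.properties.template; create a stand-in if it is absent.
[ -f "$SPARK_CONF/log4j.properties.template" ] ||
  echo 'log4j.rootCategory=INFO, console' > "$SPARK_CONF/log4j.properties.template"

# Activate the template, then drop the root logger from INFO to ERROR,
# matching the first line of the file above.
cp "$SPARK_CONF/log4j.properties.template" "$SPARK_CONF/log4j.properties"
sed -i 's/^log4j.rootCategory=INFO/log4j.rootCategory=ERROR/' "$SPARK_CONF/log4j.properties"

grep '^log4j.rootCategory' "$SPARK_CONF/log4j.properties"
```

spark-shell reads `$SPARK_HOME/conf/log4j.properties` on startup, so the quieter levels take effect on the next launch; per-logger lines such as the `org.spark_project.jetty` one can be appended to the same file.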
This article details how to configure logging in Apache Spark, in particular how to set up log4j to control the log levels of individual components, reduce unnecessary output, and make the shell more efficient to work in. It also gives workarounds for the noisy messages that arise when Spark SQL is used with Hive support.