Writing the Program (runs directly on Windows)
1. Create a new Maven project
1.1 Once the project is created, right-click it --> Add Framework Support --> check Scala
1.2 Under src/main, create a new directory named scala --> right-click the scala directory --> Mark Directory as --> Sources Root (a quick sanity check follows below)
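To confirm the setup works, you can drop a throwaway object under the new source root and run it (Hello is an arbitrary name, not from the article):

// src/main/scala/Hello.scala
object Hello {
  // Running this verifies both the Scala SDK and the Sources Root marking.
  def main(args: Array[String]): Unit =
    println("Scala source root is set up correctly")
}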
2. Log configuration (print only ERROR-level logs)
2.1 Under src/main/resources: New --> File (name it log4j.properties)
2.2 Add the following configuration
log4j.rootCategory=ERROR, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Set the default spark-shell log level to ERROR. When running the spark-shell, the
# log level for this class is used to overwrite the root logger's log level, so that
# the user can have different defaults for the shell and regular Spark apps.
log4j.logger.org.apache.spark.repl.Main=ERROR
# Settings to quiet third party logs that are too verbose
log4j.logger.org.spark_project.jetty=ERROR
log4j.logger.org.spark_project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=ERROR
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=ERROR
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR
# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR
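If you prefer not to maintain a properties file, a common alternative (not part of the original article) is to set the levels programmatically before creating the SparkContext, using the log4j 1.x API that spark-core 3.0.0 pulls in:

import org.apache.log4j.{Level, Logger}

object LogSetup {
  // Call this at the top of main(), before new SparkContext(...),
  // so the quieter level takes effect before Spark starts logging.
  def quietLogs(): Unit =
    Logger.getLogger("org").setLevel(Level.ERROR) // covers org.apache.spark, jetty, parquet, hadoop
}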
3. Add the dependency to pom.xml
<dependencies>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.12</artifactId>
    <version>3.0.0</version>
  </dependency>
</dependencies>
<build>
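  <!-- The article breaks off after the <build> tag above. The plugin section
       below is an assumed, typical completion (not the author's exact config):
       the scala-maven-plugin compiles the Scala sources added in step 1.2. -->
  <plugins>
    <plugin>
      <groupId>net.alchim31.maven</groupId>
      <artifactId>scala-maven-plugin</artifactId>
      <version>3.4.6</version>
      <executions>
        <execution>
          <goals>
            <goal>compile</goal>
            <goal>testCompile</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>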

This document walks through setting up a Scala Maven project on Windows and configuring logging so that only ERROR-level messages are printed. It also provides two WordCount programs, one writing results to the console and one to a file, to demonstrate basic Spark usage.
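The WordCount programs themselves sit behind the article's cut; the sketch below is a minimal reconstruction consistent with that description, with input.txt as a placeholder path:

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // local[*] runs Spark inside this JVM on Windows; no cluster needed
    val conf = new SparkConf().setMaster("local[*]").setAppName("WordCount")
    val sc = new SparkContext(conf)

    val counts = sc.textFile("input.txt") // placeholder input path
      .flatMap(_.split(" "))              // split each line into words
      .map((_, 1))                        // pair every word with a count of 1
      .reduceByKey(_ + _)                 // sum the counts per word

    counts.collect().foreach(println)     // console version
    // File version: counts.saveAsTextFile("output") writes part files to ./output
    sc.stop()
  }
}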