Sending Log4j Logs to Flume

This article shows how to remove the default logging package from a Spring Boot project, add Log4j2 and Flume for log collection, and configure Flume to forward the logs to Kafka. By adjusting the Maven dependencies and the logging and Flume configuration, you get efficient log management and near-real-time monitoring.

1. Add the required dependencies

Remove Spring Boot's default logging starter, then add the Log4j2 and Flume dependencies:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
	<exclusions>
		<exclusion>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-logging</artifactId>
		</exclusion>
	</exclusions>
</dependency>

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>

<dependency>
	<groupId>org.apache.flume</groupId>
	<artifactId>flume-ng-embedded-agent</artifactId>
	<version>1.8.0</version>
	<exclusions>
		<!-- Exclude log4j 1.x and its SLF4J binding so they do not clash with Log4j2 -->
		<exclusion>
			<groupId>org.slf4j</groupId>
			<artifactId>slf4j-api</artifactId>
		</exclusion>
		<exclusion>
			<groupId>org.slf4j</groupId>
			<artifactId>slf4j-log4j12</artifactId>
		</exclusion>
		<exclusion>
			<groupId>log4j</groupId>
			<artifactId>log4j</artifactId>
		</exclusion>
	</exclusions>
</dependency>

<dependency>
	<groupId>org.apache.logging.log4j</groupId>
	<artifactId>log4j-flume-ng</artifactId>
</dependency>
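
To confirm that the log4j 1.x jar and the slf4j-log4j12 binding are really excluded from the classpath, you can filter Maven's dependency tree for them (an optional check, not part of the original setup):

mvn dependency:tree -Dincludes=log4j,org.slf4j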

2. Reference the Log4j2 configuration file in application.properties

logging.config = classpath:log4j-spring.xml

3. The Log4j2 configuration file (log4j-spring.xml)

<?xml version="1.0" encoding="UTF-8"?>
<!-- The status attribute on Configuration controls Log4j2's own internal logging. It can be
     omitted; set it to trace to see detailed internal output, OFF to disable it, or Error to
     log only errors. -->
<Configuration status="OFF">
	<Appenders>
		<!-- Console appender -->
		<Console name="console" target="SYSTEM_OUT">
			<!-- Accept events at this level and above (onMatch); reject everything else (onMismatch) -->
			<ThresholdFilter level="DEBUG" onMatch="ACCEPT" onMismatch="DENY" />
			<!-- Log output pattern -->
			<PatternLayout
				pattern="%d{yyyy-MM-dd HH:mm:ss SSS} [%t] %-5level %logger{36} %marker - %msg%n" />
		</Console>

		<!-- Embedded Flume agent: buffers events in a memory channel and ships them
		     to the Avro source configured in section 4 -->
		<Flume name="flumeAppender" compress="false" type="Embedded">
			<!-- Only events carrying the "flume" marker are forwarded -->
			<MarkerFilter marker="flume" onMatch="ACCEPT" onMismatch="DENY"/>
			<PatternLayout charset="UTF-8" pattern="%d{yyyy-MM-dd HH:mm:ss SSS} [%t] %-5level %logger{36} %marker - %msg%n"/>
			<Property name="channel.type">memory</Property>
			<Property name="channel.capacity">200</Property>
			<Property name="sinks">agent1</Property>
			<Property name="agent1.type">avro</Property>
			<Property name="agent1.hostname">127.0.0.1</Property>
			<Property name="agent1.port">44444</Property>
			<Property name="agent1.batch-size">100</Property>
			<Property name="processor.type">failover</Property>
		</Flume>

		<Async name="async">
			<AppenderRef ref="flumeAppender"/>
		</Async>

	</Appenders>
	<Loggers>
		<Logger name="com.start" level="trace" additivity="true">
			<AppenderRef ref="async" />
		</Logger>
		<Root level="debug">
			<AppenderRef ref="console" />
		</Root>
	</Loggers>
</Configuration>
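
Because the MarkerFilter above only accepts events carrying the "flume" marker, application code must attach that marker explicitly or nothing will reach Flume. A minimal sketch (the class name is hypothetical; only the package must fall under com.start to match the Logger declared above):

package com.start;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.Marker;
import org.apache.logging.log4j.MarkerManager;

public class DemoService {
	private static final Logger LOGGER = LogManager.getLogger(DemoService.class);
	// Must match the marker name in the MarkerFilter of the Flume appender
	private static final Marker FLUME = MarkerManager.getMarker("flume");

	public void doWork() {
		// Carries the marker, so it passes the MarkerFilter and is shipped to Flume
		LOGGER.info(FLUME, "this event is sent to Flume (and the console)");
		// No marker: the Flume appender rejects it, so it goes to the console only
		LOGGER.info("this event stays on the console");
	}
}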

4. Add the Flume configuration file (sink-kafka-example.conf)

We forward the log events collected by Flume to Kafka, where they can easily be viewed.

# sink-kafka-example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
# Flume ships with many source implementations, including the Avro Source, Exec Source,
# Spooling Directory Source, NetCat Source, Syslog TCP Source, Syslog UDP Source, HTTP Source, etc.
# Here we use an Avro source to match the embedded agent's Avro sink from section 3.
a1.sources.r1.type = avro
a1.sources.r1.bind = 127.0.0.1
a1.sources.r1.port = 44444

# Describe the sink
# Built-in sinks include the HDFS sink, Logger sink, Avro sink, File Roll sink, Null sink, HBase sink, Kafka sink, etc.
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = demo
a1.sinks.k1.kafka.bootstrap.servers = localhost:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy

# Use a channel which buffers events in memory
# For channels, Flume provides the Memory Channel, JDBC Channel, File Channel, etc.
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

5. Start Flume

In a cmd window, change to Flume's bin directory and run the command below; it loads the sink-kafka-example.conf file created above.

flume-ng.cmd agent -conf ../conf -conf-file ../conf/sink-kafka-example.conf -name a1 -property flume.root.logger=INFO,console
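
On Linux or macOS, the equivalent launch using Flume's standard long-option syntax would be:

flume-ng agent --conf ../conf --conf-file ../conf/sink-kafka-example.conf --name a1 -Dflume.root.logger=INFO,console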

6. Start Kafka and view the results
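
With Zookeeper and Kafka running, a console consumer on the demo topic should print each log line as Flume delivers it. A minimal check, assuming a local Kafka installation with a recent CLI (older releases use --zookeeper localhost:2181 instead of --bootstrap-server; create the demo topic first with kafka-topics if auto-creation is disabled):

kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic demo --from-beginning

Trigger a few LOGGER.info(FLUME, ...) calls in the application (see the sketch in section 3) and the formatted log lines should appear in the consumer window.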
