1. Add the required dependencies
Remove Spring Boot's default logging starter and add the log4j2 and Flume dependencies:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-embedded-agent</artifactId>
    <version>1.8.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-flume-ng</artifactId>
</dependency>
2. Point the project configuration (application.properties) at the log4j2 config file
logging.config = classpath:log4j-spring.xml
3. The log4j2 configuration file (log4j-spring.xml)
<?xml version="1.0" encoding="UTF-8"?>
<!-- The status attribute on Configuration controls log4j2's own internal logging. It is optional; set it to TRACE to see detailed internal output, OFF to disable it, or ERROR to log only errors. -->
<Configuration status="OFF">
    <Appenders>
        <!-- Console appender -->
        <Console name="console" target="SYSTEM_OUT">
            <!-- The console accepts events at this level and above (onMatch) and rejects everything else (onMismatch) -->
            <ThresholdFilter level="DEBUG" onMatch="ACCEPT" onMismatch="DENY" />
            <!-- Log output pattern -->
            <PatternLayout
                pattern="%d{yyyy-MM-dd HH:mm:ss SSS} [%t] %-5level %logger{36} %marker - %msg%n" />
        </Console>
        <!-- Embedded Flume agent; only events carrying the "flume" marker are forwarded -->
        <Flume name="flumeAppender" compress="false" type="Embedded">
            <MarkerFilter marker="flume" onMatch="ACCEPT" onMismatch="DENY"/>
            <PatternLayout charset="UTF-8" pattern="%d{yyyy-MM-dd HH:mm:ss SSS} [%t] %-5level %logger{36} %marker - %msg%n"/>
            <Property name="channel.type">memory</Property>
            <Property name="channel.capacity">200</Property>
            <Property name="sinks">agent1</Property>
            <Property name="agent1.type">avro</Property>
            <Property name="agent1.hostname">127.0.0.1</Property>
            <Property name="agent1.port">44444</Property>
            <Property name="agent1.batch-size">100</Property>
            <Property name="processor.type">failover</Property>
        </Flume>
        <Async name="async">
            <AppenderRef ref="flumeAppender"/>
        </Async>
    </Appenders>
    <Loggers>
        <Logger name="com.start" level="trace" additivity="true">
            <AppenderRef ref="async" />
        </Logger>
        <Root level="debug">
            <AppenderRef ref="console" />
        </Root>
    </Loggers>
</Configuration>
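Because the Flume appender above is guarded by a MarkerFilter (marker="flume"), only log events carrying that marker are forwarded to the embedded agent; unmarked events go to the console only. A minimal sketch of logging with the marker (the class name and messages are illustrative; requires log4j-api on the classpath):

```java
package com.start;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.Marker;
import org.apache.logging.log4j.MarkerManager;

public class FlumeLogDemo {

    private static final Logger LOGGER = LogManager.getLogger(FlumeLogDemo.class);
    // Must match the marker name declared in the MarkerFilter of flumeAppender
    private static final Marker FLUME = MarkerManager.getMarker("flume");

    public static void main(String[] args) {
        // Carries the marker: written to the console AND forwarded to Flume/Kafka
        LOGGER.info(FLUME, "order created, id={}", 10001);
        // No marker: the MarkerFilter rejects it, so it stays on the console only
        LOGGER.debug("internal detail, not shipped to Flume");
    }
}
```

Note that the logger must live under the `com.start` package (or another package routed to the `async` appender), otherwise the event never reaches the Flume appender regardless of the marker.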
4. Add the Flume configuration file (sink-kafka-example.conf)
We forward the log events collected by Flume to Kafka, where they are easy to consume and display.
# sink-kafka-example.conf: A single-node Flume configuration
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
# Flume ships with many source implementations, including Avro Source, Exec Source, Spooling Directory Source, NetCat Source, Syslog TCP Source, Syslog UDP Source, HTTP Source, etc.
a1.sources.r1.type = avro
a1.sources.r1.bind = 127.0.0.1
a1.sources.r1.port = 44444
# Describe the sink
# Built-in sinks include HDFS sink, Logger sink, Avro sink, File Roll sink, Null sink, HBase sink, etc.
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = demo
a1.sinks.k1.kafka.bootstrap.servers = localhost:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy
# Use a channel which buffers events in memory
# For channels, Flume provides Memory Channel, JDBC Channel, File Channel, etc.
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
5. Start Flume
From a cmd prompt in Flume's bin directory, run the following command; it starts agent a1 with the sink-kafka-example.conf configuration:
flume-ng.cmd agent -conf ../conf -conf-file ../conf/sink-kafka-example.conf -name a1 -property flume.root.logger=INFO,console
6. Start Kafka and check the results
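With ZooKeeper and the Kafka broker running, the forwarded log lines can be verified with the console tools that ship with Kafka. A sketch of the Windows commands, run from the Kafka installation directory (paths and the topic name follow the configuration above; adjust for your setup, and note that Kafka releases before 2.2 use `--zookeeper localhost:2181` instead of `--bootstrap-server` for kafka-topics):

```shell
:: Create the topic the Flume Kafka sink writes to (skip if auto-creation is enabled)
bin\windows\kafka-topics.bat --create --bootstrap-server localhost:9092 --topic demo --partitions 1 --replication-factor 1

:: Consume from the beginning; each marked log event should appear as one message
bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic demo --from-beginning
```

If the consumer shows nothing, check that the Flume agent logged a successful Avro source bind on port 44444 and that your application logs with the "flume" marker.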