This post follows up on the issues raised in the previous article, Log4j直接发送数据到Flume + Kafka (方式一) (sending Log4j data directly to Flume + Kafka, part one).
The approach is to extend a Log4j2 Appender and use the Flume RPC client to connect to the Flume service dynamically and send log events.
- Flume SDK dependency:
<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-sdk</artifactId>
    <version>1.7.0</version>
</dependency>
- Code that connects to Flume dynamically:
public class FindAsyncLog4j2Appender extends AbstractAppender {

    // ... fields and other methods ...

    private void connect() {
        Properties props = new Properties();
        if (isSingleSink(hosts)) {
            props.setProperty(RpcClientConfigurationConstants.CONFIG_HOSTS, "h1");
            props.setProperty(RpcClientConfigurationConstants.CONFIG_HOSTS_PREFIX + "h1", hosts);
            props.setProperty(RpcClientConfigurationConstants.CONFIG_CONNECT_TIMEOUT, String.valueOf(timeout));
            props.setProperty(RpcClientConfigurationConstants.CONFIG_REQUEST_TIMEOUT, String.valueOf(timeout));
        } else {
            props = getProperties(hosts, timeout);
        }
        try {
            rpcClient = RpcClientFactory.getInstance(props);
            if (!isStarted()) {
                start();
            }
        } catch (FlumeException e) {
            String errorMsg = "RPC client creation failed! " + e.getMessage();
            LOGGER.error(errorMsg);
            throw e;
        }
    }

    /**
     * Create an AsyncAppender.
     *
     * @param blocking True if the Appender should wait when the queue is full. The default is true.
     * @param shutdownTimeout How many milliseconds the Appender should wait to flush outstanding log events
     *                        in the queue on shutdown. The default is zero, which means wait forever.
     * @param size The size of the event queue. The default is 128.
     * @param name The name of the Appender.
     * @param filter The Filter or null.
     * @return The AsyncAppender.
     */
    @PluginFactory
    public static FindAsyncLog4j2Appender createAppender(
            @PluginAttribute(value = "blocking", defaultBoolean = true) boolean blocking,
            @PluginAttribute(value = "shutdownTimeout") long shutdownTimeout,
            @PluginAttribute(value = "bufferSize") int size,
            @PluginAttribute("name") final String name,
            @PluginAttribute("hosts") String hosts,
            @PluginAttribute(value = "timeout") Long timeout,
            @PluginElement("Filter") Filter filter,
            @PluginAttribute("application") String application,
            @PluginElement("Layout") Layout<? extends Serializable> layout) {
        if (name == null) {
            LOGGER.error("No name provided for FindAsyncLog4j2Appender");
            return null;
        }
        if (hosts == null) {
            LOGGER.error("No hosts provided for FindAsyncLog4j2Appender");
            return null;
        }
        if (application == null) {
            LOGGER.error("No application provided for FindAsyncLog4j2Appender");
            return null;
        }
        if (layout == null) {
            layout = PatternLayout.createDefaultLayout();
        }
        return new FindAsyncLog4j2Appender(name, filter, layout, size, blocking,
                shutdownTimeout, hosts, timeout, application);
    }

    // ... other methods ...
}
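The connect() method above delegates the multi-host case to a getProperties helper that is not shown in the post. Below is a minimal sketch of what such a helper might look like, assuming the hosts string is a comma-separated list of host:port pairs. The property keys are written as literals here; they mirror the Flume RPC client constants used above ("hosts", "hosts.<name>", "client.type", "connect-timeout", "request-timeout"), and "default_failover" selects Flume's failover client so the appender can switch hosts when one goes down:

```java
import java.util.Properties;

public class FlumeClientProps {

    // Sketch of a getProperties helper for multiple Flume hosts.
    // Assumes "hosts" is a comma-separated list like "10.0.0.1:4141,10.0.0.2:4141".
    static Properties getProperties(String hosts, long timeout) {
        Properties props = new Properties();
        String[] hostList = hosts.split(",");
        StringBuilder names = new StringBuilder();
        for (int i = 0; i < hostList.length; i++) {
            String name = "h" + (i + 1);                       // h1, h2, ...
            props.setProperty("hosts." + name, hostList[i].trim());
            if (i > 0) {
                names.append(' ');
            }
            names.append(name);
        }
        // Space-separated list of logical host names, e.g. "h1 h2"
        props.setProperty("hosts", names.toString());
        // Failover client: tries the hosts in order when one is unreachable
        props.setProperty("client.type", "default_failover");
        props.setProperty("connect-timeout", String.valueOf(timeout));
        props.setProperty("request-timeout", String.valueOf(timeout));
        return props;
    }

    public static void main(String[] args) {
        Properties p = getProperties("10.0.0.1:4141,10.0.0.2:4141", 3000);
        System.out.println(p);
    }
}
```

The returned Properties can be passed straight to RpcClientFactory.getInstance(props), exactly as in the single-host branch of connect().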
- Adding the appender:
Initialize a FindAsyncLog4j2Appender in the application and add it to the Log4j2 configuration.
FindAsyncLog4j2Appender appender = FindAsyncLog4j2Appender.createAppender(
        false, 0, 1024 * 10, appenderName, flumeHost, 3000L, null, "testApp", layout);
appender.start();
config.addAppender(appender);
AppenderRef ref = AppenderRef.createAppenderRef(appenderName, null, null);
AppenderRef[] refs = new AppenderRef[]{ref};
LoggerConfig loggerConfig = LoggerConfig.createLogger(false, Level.INFO, loggerName,
        "true", refs, null, config, null);
loggerConfig.addAppender(appender, null, null);
config.addLogger(loggerName, loggerConfig);
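Because createAppender is annotated with @PluginFactory, the same appender could also be declared in log4j2.xml instead of being wired programmatically. A hedged sketch of such a configuration is shown below; the element name is assumed to match the plugin's @Plugin name (not shown in the post), the packages attribute and host/application values are placeholders, and the attribute names come from the @PluginAttribute annotations above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- "packages" must point at the package containing the custom appender plugin -->
<Configuration packages="com.example.logging">
  <Appenders>
    <!-- element name assumed to match @Plugin(name = "FindAsyncLog4j2Appender") -->
    <FindAsyncLog4j2Appender name="flume" hosts="127.0.0.1:4141"
                             timeout="3000" bufferSize="10240"
                             blocking="false" application="testApp">
      <PatternLayout pattern="%d %p %c - %m%n"/>
    </FindAsyncLog4j2Appender>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="flume"/>
    </Root>
  </Loggers>
</Configuration>
```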
Start the application and check the results in the Kafka consumer for the topic. For the startup steps, see the previous post: Log4j直接发送数据到Flume + Kafka (方式一).
If the Flume service is stopped manually, the application keeps running normally; once the Flume service is restarted, the program automatically reconnects to Flume and resumes sending logs.
The demo code for this application is available at https://github.com/spring410/springbootlog4jflume
In summary, this post showed how to extend a Log4j2 Appender and use the flume-ng-sdk package to connect to the Flume service dynamically, so that the application can automatically reconnect to Flume and keep sending log messages without interruption.