Overall Architecture
- In real-world enterprise practice, ELK is a mature and widely used solution.
- Before logs enter ELK, deploy Kafka as the unified entry and exit point; if the big-data department needs the data, it can simply consume from Kafka itself.
- Logs can be collected in two ways: pushed (business events via a Kafka appender) or scraped (log files via Filebeat).
- Pushing fits our scenario better; on the other side of Kafka, the log platform subscribes to and consumes the messages.
kafka
Startup
# Start with Docker
# Start ZooKeeper
mkdir -p /opt/data/zksingle
docker run --name zookeeper -v /opt/data/zksingle:/data \
  -p 2181:2181 -e ZOO_LOG4J_PROP="INFO,ROLLINGFILE" \
  -d zookeeper:3.4.13
# Start Kafka
docker run -d --name kafka \
-p 9103:9092 \
--link zookeeper:zookeeper \
--env KAFKA_BROKER_ID=100 \
--env HOST_IP=52.82.98.209 \
--env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
--env KAFKA_ADVERTISED_HOST_NAME=52.82.98.209 \
--env KAFKA_ADVERTISED_PORT=9103 \
--restart=always \
--volume /etc/localtime:/etc/localtime \
wurstmeister/kafka:2.12-2.2.2
Verification
# Enter the container
docker exec -it kafka sh
cd /opt/kafka_2.12-2.2.2/bin
# Start a console consumer (this step auto-creates the topic)
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic demo --from-beginning
# In another terminal, verify sending
./kafka-console-producer.sh --broker-list localhost:9092 --topic demo
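If you prefer to verify from application code rather than the console scripts, a minimal producer sketch with the kafka-clients library (assuming the advertised address 52.82.98.209:9103 configured above and a client version matching the broker) would be:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DemoProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // must match KAFKA_ADVERTISED_HOST_NAME / KAFKA_ADVERTISED_PORT above
        props.put("bootstrap.servers", "52.82.98.209:9103");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // the console consumer started above should print this message
            producer.send(new ProducerRecord<>("demo", "hello from java"));
        }
    }
}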
km
kafka-manager is currently the most popular Kafka cluster management tool, originally open-sourced by Yahoo.
It provides a visual UI for operating Kafka clusters.
Official site: download link
Mind the version: the community Docker images lag behind Kafka, so we build our own image.
Startup
#Dockerfile
FROM daocloud.io/library/java:openjdk-8u40-jdk
ADD kafka-manager-2.0.0.2/ /opt/km2002/
CMD ["/opt/km2002/bin/kafka-manager","-Dconfig.file=/opt/km2002/conf/application.conf"]
# Build; make sure kafka-manager-2.0.0.2 is in the same directory
docker build -t km:2002 .
# Start
docker run --link zookeeper:zookeeper -d --name km -p 9104:9000 km:2002
Verification
Visit port 9104 and check that the cluster is up.
Check whether the demo topic created earlier is listed.
elk
Startup
docker run -d -p 9102:5601 -p 9200:9200 -p 5044:5044 -e ES_MIN_MEM=128m -e ES_MAX_MEM=1024m -it --name elk docker.io/sebp/elk:751
However, this official image has its own problems (enter the container to verify):
- The stock ELK image does not bundle a Chinese analyzer.
- The stock image exposes port 5044 as a Filebeat input, whereas we need it to read from Kafka.
Rebuilding the image
1) Bake the IK analyzer into the image
IK analyzer download link (make sure the version matches Elasticsearch)
#Dockerfile
FROM docker.io/sebp/elk:793
ADD ik/ /opt/elasticsearch/plugins/ik/
RUN chown -R elasticsearch:elasticsearch /opt/elasticsearch/plugins/ik/
# Build; put the analyzer directory ik and the Dockerfile in the same directory
docker build -t elk:v1 .
# Start; mount the Logstash config directory (before starting, put the kafka.conf below into the first directory after -v!)
docker run -d -v /opt/app/elk/docker/elk/logstash:/etc/logstash/conf.d -p 9102:5601 -p 9200:9200 -p 9300:9300 -e ES_MIN_MEM=128m -e ES_MAX_MEM=1024m -it --name elk elk:v1
Appendix: the kafka.conf file
input {
  kafka {
    bootstrap_servers => ["52.82.98.209:9103"]
    group_id => "logstash"
    topics => ["mylog"]
    add_field => {"channel" => "kafka"}
    codec => json {
      charset => "UTF-8"
    }
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "mylog"
  }
  stdout {
  }
}
Verification
Use a REST client, or the curl command:
curl http://localhost:9200/_analyze -X POST -H 'Content-Type: application/json' -d '{"text":"test elasticsearch 测试分词效果","analyzer":"ik_smart"}'
Kibana verification
- Visit port 9102 and create an index pattern in Kibana.
- Repeat the Kafka verification steps: send a message from the Kafka container and query it in Kibana.
- Try searching by the analyzed Chinese terms in Kibana.
Project log instrumentation
Information to capture and push to Kafka (a LogBean sketch for these fields follows the list):
- rid (requestId): generated once per request and propagated until the call chain ends
- sid (sessionId): identifies the user; recorded after login and carried on every subsequent request
- tid (terminalId): distinguishes terminals such as PC and mobile
- time: method execution time
- url: request URL
- web: service name
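The filter and aspect below assume a plain LogBean POJO carrying these fields; the original class is not shown in these notes, so the following is only a minimal sketch:

public class LogBean {
    private String rid;      // requestId, one per call chain
    private String sid;      // sessionId, set after login
    private String tid;      // terminalId: pc, mobile, ...
    private String from;     // service name (the "web" field)
    private String url;      // request URL
    private String message;  // log message or event name
    private Long time;       // method cost in milliseconds, null for non-timing entries

    public LogBean(String rid, String sid, String tid) {
        this.rid = rid;
        this.sid = sid;
        this.tid = tid;
    }

    public String getRid() { return rid; }
    public String getSid() { return sid; }
    public String getTid() { return tid; }
    public void setFrom(String from) { this.from = from; }
    public void setUrl(String url) { this.url = url; }
    public void setMessage(String message) { this.message = message; }
    public void setTime(Long time) { this.time = time; }
    // the remaining getters are needed by JSON.toJSONString(bean) and are omitted here for brevity
}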
Key points:
- Packaging the log info -> ThreadLocal
- Cross-service propagation -> customized RestTemplate (see the interceptor sketch after this list)
- Method timing -> around advice in an aspect class
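The notes only say the RestTemplate is customized; one way to implement the cross-service propagation is a ClientHttpRequestInterceptor that copies the trace fields from the current LogBean into the outgoing headers, so the next service's filter finds them. A sketch under that assumption:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfig {

    @Bean
    public RestTemplate restTemplate() {
        RestTemplate restTemplate = new RestTemplate();
        // copy rid/sid/tid from the current request's LogBean into every outgoing call
        restTemplate.getInterceptors().add((request, body, execution) -> {
            LogBean bean = LogUtil.getLocalInstance();
            if (bean != null) {
                request.getHeaders().add("rid", bean.getRid());
                request.getHeaders().add("sid", bean.getSid());
                request.getHeaders().add("tid", bean.getTid());
            }
            return execution.execute(request, body);
        });
        return restTemplate;
    }
}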
Kafka integration
Configuration reference: https://github.com/danielwegener/logback-kafka-appender
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC2</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.2.2</version>
</dependency>
logback.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="30 seconds" debug="false">
    <!-- LOGGER PATTERN: adjust to personal preference -->
    <property name="logPattern" value="%msg%n"></property>
    <!-- Standard console output -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <charset>UTF-8</charset>
            <pattern>${logPattern}</pattern>
        </encoder>
    </appender>
    <!-- Third-party Kafka appender -->
    <appender name="third_kafka" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%msg%n</pattern>
        </encoder>
        <producerConfig>bootstrap.servers=52.82.98.209:9103</producerConfig>
        <topic>mylog</topic>
    </appender>
    <logger name="kafka">
        <appender-ref ref="third_kafka"/>
    </logger>
    <root level="INFO">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
Capturing user info with a filter
/**
 * Filter that intercepts every request.
 * Both requests issued directly by clients and calls between microservices pass through it.
 * It builds a LogBean and stores it in a ThreadLocal so the context is preserved inside the JVM.
 */
@Configuration
@Order(1)
@WebFilter(filterName = "logFilter", urlPatterns = "/*")
public class LogFilter implements Filter {

    @Value("${spring.application.name}")
    String appName;

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletRequest httpServletRequest = (HttpServletRequest) request;
        // sid: read from the cookie, which is set after the user logs in
        String cookieVal = null;
        Cookie[] cookies = httpServletRequest.getCookies();
        if (cookies != null) {
            for (Cookie cookie : cookies) {
                if ("sid".equals(cookie.getName())) {
                    cookieVal = cookie.getValue();
                    break;
                }
            }
        }
        // rid, tid, sid: try the request headers first
        // if they are absent, this request came into the web module directly, so generate them here
        String rid = StringUtils.defaultIfBlank(httpServletRequest.getHeader("rid"), CommonUtils.getRandomStr(10));
        String tid = StringUtils.defaultIfBlank(httpServletRequest.getHeader("tid"), CommonUtils.getDevice(httpServletRequest.getHeader("User-Agent")));
        String sid = StringUtils.defaultString(httpServletRequest.getHeader("sid"), cookieVal);
        String url = httpServletRequest.getRequestURI();
        // create the LogBean and load the information collected above
        LogBean logBean = new LogBean(rid, sid, tid);
        logBean.setFrom(appName);
        logBean.setUrl(url);
        // put it into the ThreadLocal
        LogUtil.setLocalInstance(logBean);
        // emit one entry to mark that the request entered the filter
        LogUtil.log("I am filter");
        try {
            chain.doFilter(request, response);
        } finally {
            // release the ThreadLocal after the request ends so pooled threads do not leak state
            LogUtil.removeLocalInstance();
        }
    }
}
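The filter also relies on a CommonUtils helper that is not shown in these notes; getRandomStr and getDevice below are assumptions about what it does, so treat this as a minimal sketch:

import java.util.UUID;

public class CommonUtils {

    /** Random string used as a fresh requestId (length capped at the UUID length). */
    public static String getRandomStr(int length) {
        String uuid = UUID.randomUUID().toString().replace("-", "");
        return uuid.substring(0, Math.min(length, uuid.length()));
    }

    /** Very rough terminal detection based on the User-Agent header. */
    public static String getDevice(String userAgent) {
        if (userAgent == null) {
            return "unknown";
        }
        String ua = userAgent.toLowerCase();
        if (ua.contains("android") || ua.contains("iphone") || ua.contains("mobile")) {
            return "mobile";
        }
        return "pc";
    }
}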
Utility class
public class LogUtil {

    private static final Logger logger = LoggerFactory.getLogger("kafka");
    private final static ThreadLocal<LogBean> logBeanThreadLocal = new ThreadLocal<>();

    public static void setLocalInstance(LogBean bean) {
        logBeanThreadLocal.set(bean);
    }

    public static LogBean getLocalInstance() {
        return logBeanThreadLocal.get();
    }

    public static void removeLocalInstance() {
        logBeanThreadLocal.remove();
    }

    /**
     * Regular trace log entry.
     * @param message the message to attach to the current LogBean
     */
    public static void log(String message) {
        LogBean bean = logBeanThreadLocal.get();
        if (bean == null) {
            throw new RuntimeException("Local Logbean not exist!");
        }
        bean.setMessage(message);
        logger.info(JSON.toJSONString(bean));
    }

    /**
     * Timing statistics.
     * @param event event name
     * @param time  time spent in milliseconds
     */
    public static void cost(String event, long time) {
        LogBean bean = logBeanThreadLocal.get();
        if (bean == null) {
            throw new RuntimeException("Local Logbean not exist!");
        }
        bean.setMessage(event);
        bean.setTime(time);
        logger.info(JSON.toJSONString(bean));
        // clear the time after logging so it does not leak into later entries via the ThreadLocal
        bean.setTime(null);
    }
}
Around advice in an aspect class
/**
 * Logging aspect.
 */
@Aspect
@Component
public class LogAspect {

    private final static Logger logger = LoggerFactory.getLogger("kafka");

    /**
     * Every method annotated with @LogInfo is intercepted.
     */
    @Pointcut("@annotation(com.itheima.logdemo.utils.LogInfo)")
    public void log() {}

    /**
     * Around advice.
     */
    @Around(value = "log()")
    public Object arround(ProceedingJoinPoint pjp) {
        try {
            MethodSignature signature = (MethodSignature) pjp.getSignature();
            String className = pjp.getTarget().getClass().getSimpleName();
            String methodName = signature.getName();
            LogUtil.log("before " + className + "." + methodName);
            // execute the target method and measure how long it takes
            long start = System.currentTimeMillis();
            Object o = pjp.proceed();
            long end = System.currentTimeMillis();
            // timing entry: the message starts with "cost"
            LogUtil.cost("cost:" + className + "." + methodName, end - start);
            LogUtil.log("after " + className + "." + methodName);
            return o;
        } catch (Throwable e) {
            e.printStackTrace();
            return null;
        }
    }
}
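The pointcut above matches methods carrying the @LogInfo annotation, which only needs to be a marker; a minimal definition matching the package used in the pointcut expression would be:

package com.itheima.logdemo.utils;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/** Marker annotation: methods annotated with @LogInfo are wrapped by LogAspect. */
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface LogInfo {
}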
Log analysis
Use Kibana's charting features to build the charts one by one, paying attention to the X and Y axes you choose.
Visualizations: create the visualization views your requirements call for.
Dashboard: combine multiple views into a big-screen display.