ELK: a simple log-collection example with Logstash and Filebeat
1 Image versions
This chapter covers only Logstash and Filebeat. For installing Elasticsearch and Kibana, see
<<ELK-ElasticSearch和kibana图形化工具>>
elasticsearch:7.2.0
kibana:7.2.0
logstash:7.2.0
filebeat:7.2.0
2 Getting started with Logstash
A previous article covered deploying the E (Elasticsearch) and K (Kibana) of ELK; this chapter gives a brief introduction to the L (Logstash).
2.1 Introduction
Logstash is a powerful data-processing tool: it can transport data, parse and transform it, and format the output, and it has a rich plugin ecosystem, so it is commonly used for log processing.
Here we use Logstash only for processing logs; after processing, it sends the results to Elasticsearch.
2.2 Example
2.2.1 Installing Logstash
Mounted directories:
Create /home/qw/elk/logstash to hold the Logstash configuration files:
logstash.yml: an empty file (in the 7.x image this file ships with a default Elasticsearch connection, which must be overridden)
logstash.conf: the core Logstash configuration file; fill in the parameters as shown in the examples below
/home/qw/elk/testlog: the directory holding the log files to be read
Run:
docker run -p 5044:5044 --name logstash -d \
--link=es:elasticsearch \
-v /home/qw/elk/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml \
-v /home/qw/elk/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
-v /home/qw/elk/testlog:/home \
logstash:7.2.0
2.2.2 Reading a file and printing to the console
input {
  file {
    path => "/home/order.log"
    discover_interval => 10
    start_position => "beginning"
  }
}
output {
  stdout { codec => rubydebug }
}
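With this configuration Logstash turns every line of /home/order.log into one event and the rubydebug codec prints it as a structured map with fields such as message and path. As a rough illustration of that per-line event model (a Python sketch, not Logstash itself; the demo file name is made up):

```python
import datetime

def file_input_events(path):
    # Mimic Logstash's file input with start_position => "beginning":
    # read the whole file and emit one event per line.
    with open(path) as f:
        for line in f:
            # Field names mirror what the rubydebug codec prints
            yield {
                "message": line.rstrip("\n"),
                "path": path,
                "@timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            }

# A stand-in for /home/order.log, written locally for the demo
with open("order_demo.log", "w") as f:
    f.write("2020-04-04 11:29:19,374 INFO WebLogAspect:53 -- SPEND TIME:0\n")

events = list(file_input_events("order_demo.log"))
print(events[0]["message"])
```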
2.2.3 Reading a file and writing to another file
input {
  file {
    path => "/home/order.log"
    discover_interval => 10
    start_position => "beginning"
  }
}
output {
  file {
    path => "/home/aaa.log"
  }
}
Note: the output file must be writable by Logstash. Check the logs for errors; if you see a permissions error, enter the container and grant write access to the file.
2.2.4 Reading a file and sending to Elasticsearch
input {
  file {
    path => "/home/order.log"
    discover_interval => 10
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["172.30.66.86:9200"]
    index => "test-%{+YYYY.MM.dd}"
  }
}
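The `%{+YYYY.MM.dd}` in the index name is a date pattern: Logstash substitutes the event's timestamp, so a new index is created per day. Roughly equivalent to this Python sketch (the strftime mapping is my assumption about the Joda-style format):

```python
import datetime

def daily_index(prefix: str, ts: datetime.date) -> str:
    # Logstash's %{+YYYY.MM.dd} corresponds to strftime's %Y.%m.%d
    return f"{prefix}-{ts.strftime('%Y.%m.%d')}"

print(daily_index("test", datetime.date(2020, 4, 4)))  # test-2020.04.04
```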
The result as displayed in Kibana:
3 Getting started with Filebeat
3.1 Introduction
Filebeat is a lightweight log-shipping agent that can forward specified logs to Logstash, Elasticsearch, Kafka, Redis, and other destinations. It uses few resources, is simple to install and configure, and supports all mainstream operating systems as well as Docker.
3.2 Example
3.2.1 Installing Filebeat
Here we use the Docker version of Filebeat:
elastic/filebeat:7.2.0
3.2.2 Reconfiguring Logstash
Logstash needs a new pipeline configuration:
input {
  beats {
    port => "5044"
  }
}
filter {
  if [fields][doc_type] == 'order' {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{JAVALOGMESSAGE:msg}" }
    }
  }
  if [fields][doc_type] == 'customer' {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{JAVALOGMESSAGE:msg}" }
    }
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => [ "172.30.66.86:9200" ]
    index => "%{[fields][doc_type]}-%{+YYYY.MM.dd}"
  }
}
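The grok filter splits each log line into timestamp, level, and message fields. `TIMESTAMP_ISO8601`, `LOGLEVEL`, and `JAVALOGMESSAGE` are stock grok patterns; as a rough regex equivalent (a simplified sketch I wrote, narrower than the real grok definitions), applied to one of the sample order.log lines:

```python
import re

# Simplified stand-ins for TIMESTAMP_ISO8601, LOGLEVEL, JAVALOGMESSAGE
LOG_RE = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\s+"
    r"(?P<level>TRACE|DEBUG|INFO|WARN|ERROR|FATAL)\s+"
    r"(?P<msg>.*)"
)

line = "2020-04-04 11:29:19,374 INFO WebLogAspect:53 -- SPEND TIME:0"
m = LOG_RE.match(line)
print(m.group("timestamp"), m.group("level"))
```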
3.2.3 Configuring Filebeat to collect the logs
Prepare two log files:
order.log
2020-04-04 11:29:19,374 INFO WebLogAspect:53 -- 请求:18,SPEND TIME:0
2020-04-04 11:38:20,404 INFO NoticeServiceApplication:664 -- The following profiles are active: test
2020-04-04 11:41:07,754 INFO NoticeServiceApplication:664 -- The following profiles are active: test
2020-04-04 12:38:58,683 INFO RedisClusterConfig:107 -- --- 启动单点Redis ---
2020-04-04 12:39:00,325 DEBUG ApplicationContextRegister:26 --
2020-04-04 12:39:06,961 INFO NoticeServiceApplication:57 -- Started NoticeServiceApplication in 17.667 seconds (JVM running for 18.377)
2020-04-04 11:27:56,577 INFO WebLogAspect:51 -- 请求:19,RESPONSE:"{\"data\":null,\"errorCode\":\"\",\"errorMsg\":\"\",\"repeatAct\":\"\",\"succeed\":true}"
2020-04-04 11:27:56,577 INFO WebLogAspect:53 -- 请求:19,SPEND TIME:1
2020-04-04 11:28:09,829 INFO WebLogAspect:42 -- 请求:20,URL:http://192.168.7.203:30004/sr/flushCache
2020-04-04 11:28:09,830 INFO WebLogAspect:43 -- 请求:20,HTTP_METHOD:POST
2020-04-04 11:28:09,830 INFO WebLogAspect:44 -- 请求:20,IP:192.168.7.98
2020-04-04 11:28:09,830 INFO WebLogAspect:45 -- 请求:20,CLASS_METHOD:com.notice.web.estrictController
2020-04-04 11:28:09,830 INFO WebLogAspect:46 -- 请求:20,METHOD:flushRestrict
2020-04-04 11:28:09,830 INFO WebLogAspect:47 -- 请求:20,ARGS:["{\n}"]
2020-04-04 11:28:09,830 DEBUG SystemRestrictController:231 -- 刷新权限限制链
2020-04-04 11:38:20,404 INFO NoticeServiceApplication:664 -- The following profiles are active: test
2020-04-04 11:41:07,754 INFO NoticeServiceApplication:664 -- The following profiles are active: test
2020-04-04 11:41:40,664 INFO NoticeServiceApplication:664 -- The following profiles are active: test
2020-04-04 11:43:38,224 INFO NoticeServiceApplication:664 -- The following profiles are active: test
2020-04-04 11:47:49,141 INFO NoticeServiceApplication:664 -- The following profiles are active: test
2020-04-04 11:51:02,525 INFO NoticeServiceApplication:664 -- The following profiles are active: test
2020-04-04 11:52:28,726 INFO NoticeServiceApplication:664 -- The following profiles are active: test
2020-04-04 11:53:55,301 INFO NoticeServiceApplication:664 -- The following profiles are active: test
2020-04-04 11:54:26,717 INFO NoticeServiceApplication:664 -- The following profiles are active: test
2020-04-04 11:58:48,834 INFO NoticeServiceApplication:664 -- The following profiles are active: test
2020-04-04 12:38:51,126 INFO NoticeServiceApplication:664 -- The following profiles are active: test
2020-04-04 12:38:58,683 INFO RedisClusterConfig:107 -- --- 启动单点Redis ---
2020-04-04 12:39:00,325 DEBUG ApplicationContextRegister:26 -- ApplicationContextRegister.setApplicationContext:applicationContextorg.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@5f150435: startup date [Tue Dec 26 12:38:51 CST 2017]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@63c12fb0
2020-04-04 12:39:06,961 INFO NoticeServiceApplication:57 -- Started NoticeServiceApplication in 17.667 seconds (JVM running for 18.377)
customer.log
2020-04-04 10:05:56,476 INFO ConfigClusterResolver:43 - Resolving eureka endpoints via configuration
2020-04-04 10:07:23,529 INFO WarehouseController:271 - findWarehouseList,json{"formJSON":{"userId":"885769620971720708"},"requestParameterMap":{},"requestAttrMap":{"name":"asdf","user":"8857696","ip":"183.63.112.1","source":"asdfa","customerId":"885768861337128965","IMEI":"863267033748196","sessionId":"xm1cile2bcmb15wtqmjno7tgz","sfUSCSsadDDD":"asdf/10069&ADR&1080&1920&OPPO R9s Plus&Android6.0.1","URI":"/warehouse-service/appWarehouse/findByCustomerId.apec","encryptType":"2","requestStartTime":3450671468321405}}
2020-04-04 10:07:23,650 INFO WarehouseServiceImpl:325 - warehouse list:8,warehouse str:[{"addressDetail":"nnnnnnnn","areaId":"210624","areaNa":""}]
2020-04-04 10:10:56,477 INFO ConfigClusterResolver:43 - Resolving eureka endpoints via configuration
2020-04-04 10:15:56,477 INFO ConfigClusterResolver:43 - Resolving eureka endpoints via configuration
2020-04-04 10:20:56,478 INFO ConfigClusterResolver:43 - Resolving eureka endpoints via configuration
2020-04-04 10:05:56,476 INFO ConfigClusterResolver:43 - Resolving eureka endpoints via configuration
2020-04-04 10:07:23,529 INFO WarehouseController:271 - findWarehouseList,json{"formJSON":{"userId":"885769620971720708"}}]
2020-04-04 10:10:56,477 INFO ConfigClusterResolver:43 - Resolving eureka endpoints via configuration
2020-04-04 10:15:56,477 INFO ConfigClusterResolver:43 - Resolving eureka endpoints via configuration
2020-04-04 10:20:56,478 INFO ConfigClusterResolver:43 - Resolving eureka endpoints via configuration
Configure filebeat.yml:
filebeat.inputs:
- paths:
    - /home/order.log
  multiline:
    pattern: ^\d{4}
    negate: true
    match: after
  fields:
    doc_type: order
- paths:
    - /home/customer.log
  multiline:
    pattern: ^\d{4}
    negate: true
    match: after
  fields:
    doc_type: customer
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
  hosts: ["172.30.66.86:5044"]
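The multiline block joins continuation lines (e.g. a stack trace or a wrapped message) onto the log line that started the event: with `negate: true` and `match: after`, any line that does NOT start with four digits is appended to the previous event. A Python sketch of that grouping rule (the continuation line in the sample is hypothetical):

```python
import re

PATTERN = re.compile(r"^\d{4}")  # same pattern as in filebeat.yml

def merge_multiline(lines):
    # negate: true + match: after -> a line NOT matching ^\d{4}
    # attaches to the previous event
    events = []
    for line in lines:
        if PATTERN.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

raw = [
    "2020-04-04 12:39:00,325 DEBUG ApplicationContextRegister:26 --",
    "    at com.example.Something(Example.java:42)",  # hypothetical continuation
    "2020-04-04 12:39:06,961 INFO NoticeServiceApplication:57 -- Started",
]
print(merge_multiline(raw))
```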
Note: some settings differ between 7.x and earlier versions; see the official configuration reference on the Elastic website for details.
Start Filebeat:
docker run --name filebeat -d \
-v /home/qw/elk/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml \
-v /home/qw/elk/testlog/:/home/ \
elastic/filebeat:7.2.0
Result:
Two new indices now appear in Kibana. Create an index pattern for them, select the new index, and the final view shows the collected logs.
That completes a basic log-collection setup. For more complex business requirements, adjust the configuration details to your needs.
Reference: 使用Docker搭建ELK日志系统 (Building an ELK logging system with Docker)