Logstash plugin management
logstash-plugin
- List installed plugins: /usr/share/logstash/bin/logstash-plugin list
- Install a plugin online: /usr/share/logstash/bin/logstash-plugin install logstash-filter-multiline
- Update a plugin: /usr/share/logstash/bin/logstash-plugin update logstash-filter-multiline
- Uninstall a plugin: /usr/share/logstash/bin/logstash-plugin uninstall logstash-filter-multiline
Tomcat logs
Any line that does not start with "[" is merged into the previous log event. The same approach works for other kinds of multiline logs.
filter {
  if [type] == "tomcat_error" {
    multiline {
      pattern => "^[^\[]"
      what => "previous"
    }
    mutate {
      split => ["message", "|"]
    }
    grok {
      match => {
        "message" => "(?m)%{TIMESTAMP_ISO8601:logtime}"
      }
    }
  }
}
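The multiline filter used above has to be installed separately (see the plugin commands in the plugin management section) and is deprecated in newer Logstash releases in favor of the multiline codec on the input side. A minimal sketch of the same merge rule expressed as a codec; the log path is a placeholder:
input {
  file {
    path => "/usr/local/tomcat/logs/catalina.out"   # placeholder path
    type => "tomcat_error"
    codec => multiline {
      pattern => "^\["      # a line starting with "[" begins a new event
      negate => true        # lines that do NOT match the pattern...
      what => "previous"    # ...are appended to the previous event
    }
  }
}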
Logstash basic syntax
Sections
A block is delimited with
{ }
Plugins and their key-value settings are defined inside the block:
input {
  stdin {}
  syslog {}
}
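Putting sections together, a pipeline configuration file is just a series of such blocks, typically input, filter and output. A minimal runnable sketch (stdin/stdout and the mutate stage are chosen purely for illustration):
input {
  stdin {}
}
filter {
  mutate {
    add_field => { "received_via" => "stdin" }   # example filter stage
  }
}
output {
  stdout { codec => rubydebug }
}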
Data types
debug => true                              # boolean
host => "hostname"                         # string
port => 112                                # number
match => ["datetime", "UNIX", "ISO8601"]   # array
options => {                               # hash (entries are separated by whitespace, not commas)
  key1 => "value1"
  key2 => "value2"
}
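These types appear directly as plugin settings. A sketch using the tcp input, with the plugin and values chosen only to illustrate each type:
input {
  tcp {
    host => "0.0.0.0"          # string
    port => 5000               # number
    tags => ["tcp", "raw"]     # array
    add_field => {             # hash
      "received_via" => "tcp"
    }
  }
}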
Conditionals
equality, etc: ==, !=, <, >, <=, >=
regexp: =~, !~
inclusion: in, not in
boolean: and, or, nand, xor
unary: !()
Example
if "_grokparsefailure" not in [tags] {
} else if [status] !~ /^2\d\d/ and [url] == "/noc.gif" {
} else {
}
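Conditionals are most often used to route events, for example in the output stage. A sketch; the Elasticsearch address is an assumption:
output {
  if "_grokparsefailure" not in [tags] {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]   # assumed local Elasticsearch
    }
  } else {
    stdout { codec => rubydebug }   # dump unparsed events for inspection
  }
}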
Reading from files
input {
  file {
    path => ["/var/log/*.log", "/var/log/message"]
    type => "system"
    start_position => "beginning"
  }
}
- 1 If you want to import existing data into Elasticsearch, you will usually also need the filter/date plugin to rewrite the default "@timestamp" field value. This is covered later.
- 2 FileWatch only supports absolute file paths and does not recurse into directories automatically. If you need that, list the concrete files explicitly using the array syntax.
- 3 LogStash::Inputs::File only initializes a FileWatch object during the registration phase of the process, so it cannot support fluentd-style paths such as path => "/path/to/%{+yyyy/MM/dd/hh}.log". To achieve the same effect you have to spell out the levels, e.g. path => "/path/to/*/*/*/*.log".
- 4 start_position only takes effect when the file has never been watched before. If the sincedb file already contains an inode record for the file, Logstash will resume reading from the recorded position. When testing repeatedly you therefore need to delete the sincedb file each time (see the sketch below).
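For the repeated-testing case in note 4, one option is to point sincedb_path at /dev/null so read positions are never persisted. A sketch, intended for testing only:
input {
  file {
    path => ["/var/log/*.log", "/var/log/message"]
    type => "system"
    start_position => "beginning"
    sincedb_path => "/dev/null"   # do not remember positions; every run starts from the beginning
  }
}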
Reference: https://blog.youkuaiyun.com/qq_34646817/article/details/81232083
Grok regex syntax
logstash + grok regex patterns
Example
The log lines look like this:
[vclound][2015-11-03 03:35:50,283][INFO][/usr/lib/python2.6/site-packages/urllib3/connectionpool.py:203][_new_conn][-][140192616544000]=[Starting new HTTP connection (1): 240.10.129.80]
[vclound][2015-11-03 03:35:50,381][DEBUG][/usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295][_make_request][-][140192616544000]=["POST /v2.0/tokens HTTP/1.1" 200 3080]
[vclound][2015-11-03 03:35:50,384][INFO][/usr/lib/python2.6/site-packages/urllib3/connectionpool.py:203][_new_conn][-][140192616544000]=[Starting new HTTP connection (1): 240.10.129.160]
[vclound][2015-11-03 03:35:50,454][DEBUG][/usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295][_make_request][-][140192616544000]=["GET /v2/bb0b51d166254dc99bc7462c0ac002ff/servers/b4b530e7-cd9b-42c1-bcd4-a48140726846 HTTP/1.1" 404 73]
Logstash grok pattern reference:
filter {
  if [type] == "pinyun" {
    grok {
      match => { "message" => "\[%{USERNAME:username}\]\[%{TIMESTAMP_ISO8601:time}\]\[%{LOGLEVEL:loglevel}\]\[%{PROG:filepath}\]\[%{PROG:function}\]\[-\]\[%{BASE16NUM:progid}\]\=\[%{GREEDYDATA:info}\]" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
  }
}
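As mentioned in note 1 of the file-input section, a date filter can copy the captured time field into @timestamp. A minimal sketch matching the 2015-11-03 03:35:50,283 format above:
filter {
  date {
    match => ["time", "yyyy-MM-dd HH:mm:ss,SSS"]
    target => "@timestamp"   # this is the default target, shown here for clarity
  }
}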
Tomcat example 1
# source log line
2018-09-20 22:45:01,849 WARN [monitor:32] 请求ip=220.188.176.xxx,uri=/device/monitor,耗时=27ms
# grok pattern
%{TIMESTAMP_ISO8601:time_local} %{LOGLEVEL:Level} %{GREEDYDATA:info}
# parsed result
{
  "time_local": [
    [
      "2018-09-20 22:45:01,849"
    ]
  ],
  "Level": [
    [
      "WARN"
    ]
  ],
  "info": [
    [
      "[monitor:32] 请求ip=220.188.176.2xx,uri=/device/monitor,耗时=27ms"
    ]
  ]
}
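Wrapped into a full filter block, the example-1 pattern would look like this (the type value tomcat_access is hypothetical):
filter {
  if [type] == "tomcat_access" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:time_local} %{LOGLEVEL:Level} %{GREEDYDATA:info}" }
    }
  }
}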
Tomcat example 2
# source log line
2018-10-26 16:48:00.023 [c7f1db98738b4e699557f3bde379056e] [tradeService] [closePayment] [pool-3-thread-1] INFO c.j.j.t.c.c.LoggerConfig - ? - 返回信息:[null]
# grok pattern
%{TIMESTAMP_ISO8601:time_local} \[%{WORD:string1}\] \[%{WORD:string2}\] \[%{WORD:string3}\]
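The pattern above only captures the first four fields. The remaining thread, level and message parts can be captured as well; a sketch, where the extra field names thread/level/info and the type value tomcat_service are assumptions:
filter {
  if [type] == "tomcat_service" {
    grok {
      match => {
        "message" => "%{TIMESTAMP_ISO8601:time_local} \[%{WORD:string1}\] \[%{WORD:string2}\] \[%{WORD:string3}\] \[%{DATA:thread}\] %{LOGLEVEL:level} %{GREEDYDATA:info}"
      }
    }
  }
}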
Grok pattern references:
https://blog.youkuaiyun.com/jiankunking/article/details/67641143
https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns