This article does not cover installing the ELK stack itself; it focuses on the following:
1. Collecting the MySQL slow log with Filebeat and merging each multi-line slow-log entry into a single line
2. Parsing the slow log into JSON with Logstash's grok filter
3. Visualizing the data on a Kibana dashboard
Collecting the MySQL slow log with Filebeat and merging each multi-line entry into a single line
The Filebeat configuration is as follows; the key part is these three lines:
multiline.pattern: "^# User@Host: "
multiline.negate: true
multiline.match: after
These designate "# User@Host: " as the start of each event; because negate is true and match is after, every line that does not match the pattern is appended after the matching line, so one slow-log entry becomes one event.
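For reference, one raw slow-log entry spans several lines like these (reconstructed from the sample event shown in the Logstash config below), all of which Filebeat folds into a single message:

# User@Host: root[root] @  [118.125.0.86]  Id:    13
# Query_time: 16.470589  Lock_time: 0.000090 Rows_sent: 300000  Rows_examined: 300000
SET timestamp=1671957812;
SELECT * FROM class_comment;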
Then send the output to Logstash, and comment out the elasticsearch output section (Filebeat allows only one active output).
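A minimal filebeat.yml sketch that puts those three lines in context; the slow-log path and the Logstash address are assumptions, adjust them to your environment:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/mysql/mysql-slow.log   # assumed path; use your slow_query_log_file value
  # treat "# User@Host: " as the start of an entry and glue the following lines to it
  multiline.pattern: "^# User@Host: "
  multiline.negate: true
  multiline.match: after

# Filebeat allows only one output, so the default elasticsearch section stays commented
#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["10.0.20.10:5044"]   # assumed host; the port matches the beats input below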
Parsing the slow log into JSON with Logstash's grok filter
The detailed, step-by-step construction of the grok pattern is covered in a separate post: grok正则解析mysql-slowlog (beretxj, CSDN blog).
With that analysis in hand, the following configuration is all we need:
[root@VM-20-10-centos conf.d]# cat logstash_to_elasticsearch.conf
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
beats {
port => 5044
}
}
# "message" => "# User@Host: root[root] @ [118.125.0.86] Id: 13\n# Query_time: 16.470589 Lock_time: 0.000090 Rows_sent: 300000 Rows_examined: 300000\nSET timestamp=1671957812;\nSELECT * FROM class_comment;"
filter {
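# pull the user, client IP, timings, row counts, epoch timestamp, and the SQL text
# out of the merged message; (?m) lets the pattern match across the embedded newlines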
grok {
match => [ "message", "(?m)^#\s+User@Host:\s+%{USER:query_user}\[[^\[\]]+\]\s+@\s+\[%{IP:query_ip}\]\s*Id:\s+%{NUMBER:id:int}.*#\s+Query_time:\s+%{NUMBER:query_time:float}\s+Lock_time:\s+%{NUMBER:lock_time:float}\s+Rows_sent:\s+%{NUMBER:rows_sent:int}\s+Rows_examined:\s+%{NUMBER:rows_examined:int}.*SET\s+timestamp=%{NUMBER:timestamp};\s*%{GREEDYDATA:query}" ]
}
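# tag events containing the "# Time: " header (they don't parse as queries) so they can be dropped below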
grok {
match => { "message" => "# Time: " }
add_tag => [ "drop" ]
tag_on_failure => []
}
if "drop" in [tags] {
drop {}
}
date {
# the grok above captures the epoch seconds into "timestamp", so parse it as UNIX time
match => ["timestamp", "UNIX"]
target => "@timestamp"
timezone => "Asia/Shanghai"
}
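# derive a yyyy.MM.dd suffix from the event time, used for the daily index name in the output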
ruby {
code => "event.set('[@metadata][today]', Time.at(event.get('@timestamp').to_i).localtime.strftime('%Y.%m.%d'))"
}
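# the multi-line message has been parsed into discrete fields, so drop the raw text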
mutate {
remove_field => [ "message" ]
}
}
output {
elasticsearch {
hosts => ["http://10.0.20.10:9200"]
index => "mysql-slow-%{[@metadata][today]}"
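# document_type is deprecated from Elasticsearch 6 onward and can be dropped on 7.x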
document_type => "mysql-slow"
template_overwrite => true
#user => "elastic"
#password => "changeme"
}
stdout {
codec => rubydebug
}
}
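With the sample event shown in the comment above, the stdout output looks roughly like this (values taken from that message; beats metadata fields trimmed):

{
       "query_user" => "root",
         "query_ip" => "118.125.0.86",
               "id" => 13,
       "query_time" => 16.470589,
        "lock_time" => 9.0e-05,
        "rows_sent" => 300000,
    "rows_examined" => 300000,
        "timestamp" => "1671957812",
            "query" => "SELECT * FROM class_comment;",
       "@timestamp" => 2022-12-25T08:43:32.000Z
}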
Visualizing the data on a Kibana dashboard
Simply build a few visualizations, save them, and add them to a dashboard. The charts here look sparse because there is very little data; in production, where slow-log entries accumulate in volume, the same charts become far more informative.
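Before building charts, it can help to verify the data with an aggregation in Kibana's Dev Tools; this sketch (the index pattern and hourly interval are assumptions) returns the average query_time per hour that a line chart would plot:

GET mysql-slow-*/_search
{
  "size": 0,
  "aggs": {
    "per_hour": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "hour"
      },
      "aggs": {
        "avg_query_time": { "avg": { "field": "query_time" } }
      }
    }
  }
}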