Logstash rules for filtering and splitting Java logs
input {
  kafka {
    bootstrap_servers => "127.0.0.1:9092"
    auto_offset_reset => "latest"
    topics_pattern    => "index.*"      # subscribe to every topic matching this pattern
    consumer_threads  => 5
    codec             => "json"         # events arrive as JSON documents
  }
}
filter {
  # Split the raw line into timestamp, IP, service, thread, level, logger and message body.
  grok {
    match => { "message" => "(?m)^%{TIMESTAMP_ISO8601:createTime}%{SPACE}#%{IP:ip_address}#%{SPACE}#%{DATA:service}#%{SPACE}%{DATA}%{SPACE}\[%{DATA:thread}\]%{SPACE}%{LOGLEVEL:level}%{SPACE}%{DATA:logger}%{SPACE}-%{SPACE}(?<msg>.*)" }
  }
  mutate {
    remove_field => ["message"]         # the raw line is no longer needed once parsed
  }
  # Feign request logs: pull out HTTP method, path, parameters, user id and trace id.
  # The Chinese labels (请求方式 / 访问地址 / 请求参数 = request method / URL / parameters)
  # are literal text emitted by the application and are matched verbatim.
  if [msg] =~ /FeignRequestInterceptor/ {
    grok {
      match => { "msg" => "(?m).*请求方式 : %{WORD:http_method}.*访问地址 : %{URIPATHPARAM:uri_path}.*请求参数 : (?<uri_param>[A-Za-z0-9$.+!*'|(){},~@#%&/=:;_?\-\[\]<>]*).*\{current_id=\[%{NUMBER:uid}\].*traceId=\[%{DATA:traceId}\].*" }
      # add_field / remove_field inside grok only run when the match succeeds
      add_field    => { "log_type" => "feign" }
      remove_field => ["msg"]
    }
  }
}
# Logstash concatenates multiple filter blocks and runs them in order.
filter {
  # Controller request logs: pull out HTTP method, URI, handler method name and arguments.
  if [msg] =~ /ControllerRequestLog/ {
    grok {
      match => { "msg" => "(?m).*URL:\[%{WORD:http_method}-%{DATA:uri_path}\]\$Method.*?\.%{WORD:method}\]\$Args:\[%{DATA:args}\]$" }
      add_field    => { "log_type" => "controller" }
      remove_field => ["msg"]
    }
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    # One index per service, named from the parsed field.
    # Note: Elasticsearch index names must be lowercase.
    index => "%{service}"
  }
  stdout {
    codec => rubydebug                  # print parsed events to the console for debugging
  }
}
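
The config itself does not show the raw log layout, but the first grok pattern implies a line like the following. This is a hypothetical example (every value is made up): an ISO-8601 timestamp, the client IP and service name each wrapped in #, an extra token (for example a PID), the thread in brackets, then level, logger and message:

2024-05-20T10:15:30.123 #192.168.1.10# #user-service# 12345 [http-nio-8080-exec-3] INFO com.example.web.UserController - ...

Everything after the " - " separator lands in the msg field, which the two conditional filters then inspect.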
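
A Feign interceptor message that would satisfy the second grok might look like this (again, all values are invented for illustration; the line must also contain the string FeignRequestInterceptor for the conditional to fire):

FeignRequestInterceptor 请求方式 : POST, 访问地址 : /api/order/create, 请求参数 : orderId=123&userId=456 {current_id=[10086], traceId=[a1b2c3d4]}

From this line grok produces http_method=POST, uri_path=/api/order/create, uri_param=orderId=123&userId=456, uid=10086 and traceId=a1b2c3d4, and tags the event with log_type=feign.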
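
Likewise, a controller message matching the third grok has the URL:[...]$Method:[...]$Args:[...] shape; the values here are hypothetical:

ControllerRequestLog URL:[GET-/api/user/detail]$Method:[com.example.user.controller.UserController.detail]$Args:[1001]

This yields http_method=GET, uri_path=/api/user/detail, method=detail and args=1001, with log_type=controller.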
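
Before deploying, the pipeline syntax can be checked without starting the Kafka consumers (the config file name here is a placeholder):

bin/logstash -f java-log-pipeline.conf --config.test_and_exit

The grok patterns themselves are easiest to iterate on against real log lines in Kibana's Grok Debugger under Dev Tools.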
This article describes how Logstash reads JSON-encoded logs from Kafka, parses a specific Java log format with grok filters, and distinguishes Feign requests from Controller requests. The parsed data is enriched with custom fields and the results are sent to Elasticsearch for storage and indexing.