Filebeat + ES + Kibana Setup Guide

This article covers how to configure multiple indices and multi-line log handling in Filebeat, and how to create an ingest pipeline through Kibana for log preprocessing. It walks through grok patterns for parsing several kinds of log lines and ends with a note on starting the ES container during deployment.

1. First, the filebeat.yml configuration for multiple indices is shown below.

At collection time, multi-line log entries are also merged into a single event (a short example follows this list):

multiline.pattern: ^\[   lines that do not start with [ are merged into the previous line
multiline.negate: true   lines that do not match the pattern are merged into the previous line
multiline.match: after   they are appended to the end of the previous line
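For example, with these settings the three physical lines below (a hypothetical Node.js stack trace, shown only for illustration) are shipped by Filebeat as a single event, because the second and third lines do not start with [:

[2020-05-19 00:00:10.734] [ERROR] [deviceserver.js][error] - saveDeviceInfo failed
    at Object.save (/admin/app/device.js:42:11)
    at process._tickCallback (internal/process/next_tick.js:68:7)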

filebeat.inputs:
- type: log
  paths:
    - /admin/logs/deviceserver.js/biz*.log
  fields:
    index: 'biz'
  multiline.pattern: ^\[
  multiline.negate: true
  multiline.match: after

- type: log
  paths:
    - /admin/logs/deviceserver.js/deviceserver*.log
  fields:
    index: 'device'
  multiline.pattern: ^\[
  multiline.negate: true
  multiline.match: after
#============================= Filebeat modules ===============================

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml

  reload.enabled: false
#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["39.108.95.91:9200"]
  indices:
    - index: "biz-%{+yyyy.MM.dd}"
      when.contains:
        fields:
          index: "biz"

    - index: "device-%{+yyyy.MM.dd}"
      when.contains:
        fields:
          index: "device"
  pipeline: "xiaoan-pipeline-id"

2. An Elasticsearch ingest pipeline is needed to preprocess the logs; it is created here through Kibana Dev Tools. If you ship through Logstash instead, Logstash can do the parsing and this step is not needed.

PUT _ingest/pipeline/xiaoan-pipeline-id
{
	"description": "timestamp pipeline",
	"processors": [
		{
			"grok": {
				"field": "message",
				"patterns": ["\\[%{TIMESTAMP_ISO8601:timestamp}\\] \\[%{LOGLEVEL:level}\\] \\[%{DATA:worker}\\]\\[%{DATA:logtype}\\] - %{GREEDYDATA:message}"]
			}
		},
		{
			"date": {
				"field": "timestamp",
				"formats": [
					"yyyy-MM-dd HH:mm:ss.SSS"
				],
				"timezone": "Asia/Shanghai"
			}
		},
		{
			"remove": {
				"field": "timestamp"
			}
		}
	],
	"on_failure": [
		{
			"set": {
				"field": "_index",
				"value": "failed-{{ _index }}"
			}
		}
	]
}

This request both creates and updates the pipeline.
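Before wiring the pipeline into Filebeat, you can exercise it end to end with the _simulate API. The log line below is an abbreviated sample in the same format as the deviceserver logs, written only for this test:

POST _ingest/pipeline/xiaoan-pipeline-id/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "[2020-05-19 00:00:10.734] [INFO] [deviceserver.js][info] - device heartbeat ok"
      }
    }
  ]
}

The response should show level, worker, logtype and message extracted by grok, the timestamp parsed into @timestamp by the date processor, and the temporary timestamp field removed.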

The grok pattern here must be written exactly right. Verify it with the Grok Debugger under Kibana Dev Tools; the test cases below should help you get familiar with the syntax.

Log 1:
localhost GET /v2/applink/5c2f4bb3e9fda1234edc64d 400 46ms 5bc6e716b5d6cb35fc9687c0
Pattern 1:

%{WORD:environment} %{WORD:method} %{URIPATH:url} %{NUMBER:response_status} %{WORD:response_time} %{USERNAME:user_id}
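In the Grok Debugger this pattern turns the line above into roughly the following structured result (all values stay strings, since no type conversions are used):

{
  "environment": "localhost",
  "method": "GET",
  "url": "/v2/applink/5c2f4bb3e9fda1234edc64d",
  "response_status": "400",
  "response_time": "46ms",
  "user_id": "5bc6e716b5d6cb35fc9687c0"
}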

Log 2:
[10/Jul/2018:11:13:55 +0800] 10.242.169.166 "-" "Jakarta Commons-HttpClient/3.0" "POST /message/entry HTTP/1.0" 200 <13968> 0.061638
Pattern 2:

\[%{HTTPDATE:timestamp}\] %{IP:remote} \"%{DATA:referer}\" \"%{DATA:ua}\" \"%{DATA:status_line}\" %{NUMBER:status_code:int} <%{NUMBER:process_id:int}> %{NUMBER:use_time:float}
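Applied to that line, the grok processor should yield roughly the following (note how the :int and :float suffixes convert the numeric fields):

{
  "timestamp": "10/Jul/2018:11:13:55 +0800",
  "remote": "10.242.169.166",
  "referer": "-",
  "ua": "Jakarta Commons-HttpClient/3.0",
  "status_line": "POST /message/entry HTTP/1.0",
  "status_code": 200,
  "process_id": 13968,
  "use_time": 0.061638
}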


Log 3:
[2020-05-19 00:00:10.734] [INFO] [deviceserver.js][info] - [XIAOLIU] device saveDeviceInfo 860922042404644 {"topic":"p1/up/860922042404644","message":{"gps":[{"agl":0,"asl":0,"lat":0,"lng":0,"sat":0,"spd":0,"ts":1589817608000}],"did":"860922042404644","batLck":0,"batSN":"","batQ":0,"batV":0,"devV":378,"vib":0,"dst":11005,"slt":10090,"psts":1,"pw":0,"acc":0,"mile":6324885,"pa":-2328,"cmd":301,"ext":0,"ts":1589817608000,"seq":831,"soh":0,"sig":"4A78C7B411F7AC218AFA852CA8820261"},"prefix":"p1","channel":"up","did":"860922042404644","cmd":301}

Pattern 3:

\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{LOGLEVEL:level}\] \[%{DATA:worker}\]\[%{DATA:logtype}\] - %{GREEDYDATA:message}


Once the regular expression works, paste it into the ES command; inside the patterns array each \ must additionally be escaped as \\ (compare Pattern 3 above with the grok pattern in the pipeline definition).

A deployment note: the ES container has to be started as a dedicated non-root user such as elasticsearch and fails easily; use docker-compose run <service> bash (or run it without bash) to pinpoint the problem.
