Background
A new requirement came in today: in production we need to optimize filebeat's output so that different kinds of content are routed to different Kafka topics. If you haven't read the previous post in this series, go check it out first.
Filebeat main configuration
As before, the main part of the filebeat configuration follows the official Filebeat documentation and looks like this:
filebeat.inputs:
- type: log
  paths:
    - /data/filebeat-data/*.log
  processors:
    - add_fields:
        target: ""
        fields:
          log_type: "bizlog"
# Hints-based autodiscover for Kubernetes container logs:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*.log
        processors:
          - add_fields:
              target: ""
              fields:
                log_type: "k8slog"   # tag k8s logs so the topic routing below can match them
#output.elasticsearch:
#  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
#  username: ${ELASTICSEARCH_USERNAME}
#  password: ${ELASTICSEARCH_PASSWORD}
output.kafka:
  hosts: ['${KAFKA_HOST:kafka}:${KAFKA_PORT:9092}']
  topic: log_topic_all          # default topic for events that match no rule below
  topics:
    - topic: "bizlog-%{[agent.version]}"
      when.contains:
        log_type: "bizlog"
    - topic: "k8slog-%{[agent.version]}"
      when.contains:
        log_type: "k8slog"
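With this in place, events tagged log_type: "bizlog" are routed to the bizlog-%{[agent.version]} topic, events tagged log_type: "k8slog" go to the k8slog-%{[agent.version]} topic, and anything that matches neither rule falls back to the default topic log_topic_all. To spot-check the routing you can read a few messages from each topic with Kafka's console consumer (a minimal sketch; the broker address and the agent version in the topic name are assumptions, substitute your own):

kafka-console-consumer.sh --bootstrap-server kafka:9092 \
  --topic "bizlog-8.12.2" --from-beginning --max-messages 5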
---
The filebeat.inputs section
We added a processors block to the input; it injects custom fields into every output document:
processors:
  - add_fields:
      target: ""   # an empty target puts the fields directly at the document root
      fields:
        log_type: "bizlog"
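For illustration, here is a simplified sketch of an event as it would be published with target: "" (the field names besides log_type are just representative; the exact set depends on your Filebeat version and other enabled processors):

{
  "@timestamp": "2024-01-01T00:00:00.000Z",
  "message": "the original log line",
  "log_type": "bizlog",
  "agent": { "version": "8.12.2" }
}

Because target is empty, log_type sits at the root of the document, which is exactly what the when.contains conditions in output.kafka match against. Had we set target: "labels" instead, the field would be nested as labels.log_type and the routing conditions would have to reference that path.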