Collecting Logs with Filebeat

1. Download Filebeat  After downloading, extract the archive:

tar -zxvf filebeat-7.6.2-linux-x86_64.tar.gz

2. Configure Filebeat  Enter the extracted directory and open filebeat.yml:

  [root@centos7 ~] cd filebeat-7.6.2-linux-x86_64
  [root@centos7 filebeat-7.6.2-linux-x86_64] vim filebeat.yml

#=========================== Filebeat inputs =============================
 
filebeat.inputs:
 
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
 
- type: log
 
  # Change to true to enable this input configuration.
  enabled: true
 
  # Paths that should be crawled and fetched. Glob based paths.
  # Paths of the log files to collect
  paths:
    - /usr/local/test_app/log/test_app.log
    #- c:\programdata\elasticsearch\logs\*
  encoding: utf-8
 
  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  exclude_lines: ['DEBUG','INFO'] # do not collect lines matching DEBUG or INFO
 
  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  include_lines: ['ERROR','Exception','^WARN'] # collect lines matching ERROR, Exception, or starting with WARN
 
  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']
 
......
 
 
#================================ General =====================================
 
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
 
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
 
# Optional fields that you can specify to add additional information to the
# output.
fields:
  appname: oa_test  # extra key-value field added to every collected event
fields_under_root: true
 
......
#============================== Kibana =====================================
 
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
 
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
 
  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
 
#============================= Elastic Cloud ==================================
 
# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
 
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
 
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
 
#================================ Outputs =====================================
 
# Configure what output to use when sending the data collected by the beat.
 
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]
 
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"
 
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"
 
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts; set this to your own Logstash address
  hosts: ["127.0.0.1:5044"]
 
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
 
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
 
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
  
......
  
#================================= Migration ==================================
 
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
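In the input section above, `include_lines` is applied before `exclude_lines`, and both are plain regular-expression matches against each line. As a rough illustration of the matching (using `grep -E` as a stand-in for Filebeat's matcher, with made-up sample log lines):

```shell
# Sample log lines (made up for illustration)
printf '%s\n' \
  '2024-01-01 ERROR something failed' \
  '2024-01-01 DEBUG details' \
  '2024-01-01 INFO started' \
  'WARN low disk space' > /tmp/sample.log

# include_lines: keep lines matching ERROR, Exception, or ^WARN
grep -E 'ERROR|Exception|^WARN' /tmp/sample.log
# → prints the ERROR line and the WARN line
```

With these patterns, the DEBUG and INFO lines would be dropped even without `exclude_lines`, because `include_lines` only passes lines that match one of its expressions.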

Once the configuration is complete, you can start Filebeat.

 # Start in the foreground

 [root@centos7 filebeat-7.6.2-linux-x86_64] ./filebeat -e -c filebeat.yml

 # Start in the background
 [root@centos7 filebeat-7.6.2-linux-x86_64] nohup ./filebeat -e -c filebeat.yml > logs/filebeat.log 2>&1 &
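The background form relies on shell redirection: `> logs/filebeat.log` sends stdout to the file, and `2>&1` then points stderr at the same place, so both streams land in one log file (note that the `logs/` directory must already exist for the redirection to succeed). A minimal sketch of the same redirection, using a stand-in function instead of filebeat:

```shell
# Stand-in for ./filebeat -e: writes to both stdout and stderr
run_app() { echo "normal output"; echo "error output" >&2; }

# '> file 2>&1' first redirects stdout to the file, then points
# stderr at stdout, so both streams end up in the same file
run_app > /tmp/app.log 2>&1

cat /tmp/app.log
# → normal output
# → error output
```

The order matters: `2>&1 > file` would instead leave stderr on the terminal, because stderr would be duplicated before stdout was redirected.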

More configuration options:

filebeat.inputs: # Filebeat inputs
- type: log      # log input: read the files listed under paths
  enabled: true  # enable this input
  paths:
    - /opt/elk/logs/*.log
  tags: ["web", "test"]  # custom tags to simplify later processing
  fields:  # custom fields
    from: web-test
  fields_under_root: true # true adds the fields at the root of the event, false nests them under "fields"
setup.template.settings:
  index.number_of_shards: 3 # number of primary shards for the index
output.console:  # console output (takes no hosts setting)
  pretty: true   # pretty-print the output
  enabled: true
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
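The entries under `paths` are glob patterns, expanded much like shell globs, so `/opt/elk/logs/*.log` picks up every `.log` file in that directory. A quick sketch with throwaway files (the directory and file names here are made up):

```shell
# Create a scratch directory with a mix of file types
mkdir -p /tmp/elk-demo/logs
touch /tmp/elk-demo/logs/app1.log /tmp/elk-demo/logs/app2.log /tmp/elk-demo/logs/notes.txt

# Only the *.log files match the glob pattern
ls /tmp/elk-demo/logs/*.log
# → /tmp/elk-demo/logs/app1.log and /tmp/elk-demo/logs/app2.log
```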

Basic configuration:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    - /var/path2/*.log
output.console:  # console output
  pretty: true
### Example: Sending Logs from Filebeat to Logstash

Filebeat is a lightweight log shipper that can forward log data to Logstash or Elasticsearch. The following complete example shows how to hand the logs Filebeat collects over to Logstash for processing.

#### Filebeat configuration file

The Filebeat configuration file is typically located at `/etc/filebeat/filebeat.yml`. The key parts are:

```yaml
filebeat.inputs:
- type: log
  paths:
    - /var/log/*.log
  fields:
    log_type: application

output.logstash:
  hosts: ["localhost:5044"]
```

In this configuration:
- `filebeat.inputs` defines the log sources, here all `.log` files under `/var/log/`.
- `fields` adds custom fields such as `log_type`, used to identify the log type.
- `output.logstash` points at the Logstash address and port; make sure Logstash is listening on `localhost:5044`.

#### Logstash configuration file

Logstash configuration files usually live under `/etc/logstash/conf.d/`. An example:

```ruby
input {
  beats {
    port => 5044
  }
}

filter {
  if [fields][log_type] == "application" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601} %{LOGLEVEL:loglevel} %{GREEDYDATA:message_content}" }
    }
    date {
      match => [ "timestamp", "ISO8601" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```

In this configuration:
- The `input` section receives data from Filebeat over the Beats protocol, listening on port `5044`.
- The `filter` section selects events by the `log_type` field and parses the log content with the Grok plugin; the `date` plugin parses the timestamp.
- The `output` section sends the processed events to Elasticsearch, with index names of the form `logs-YYYY.MM.dd`.

#### Make sure the services are running

After configuring, start or restart Filebeat and Logstash for the changes to take effect. Check service status with:

```bash
systemctl status filebeat
systemctl status logstash
```

If a service is not running, start it with:

```bash
systemctl start filebeat
systemctl start logstash
```

#### Test the connection

To confirm that Filebeat can deliver logs to Logstash, look for the connection in the Logstash logs. By default, Logstash logs are written under `/var/log/logstash/`.

---

### Notes

- Make sure your Filebeat and Logstash versions are compatible.
- If Logstash runs on a remote server, make sure firewall rules allow traffic on port `5044`.
- In production, enable SSL/TLS to secure log transport.
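Before deploying the grok pattern above, it can help to sanity-check it against a sample line. `grep -E` is only a rough stand-in for grok, and the sample line and regex below are illustrative assumptions, but they approximate `%{TIMESTAMP_ISO8601} %{LOGLEVEL:loglevel}`; the `date` command at the end shows the daily index suffix that `logs-%{+YYYY.MM.dd}` produces:

```shell
# Made-up sample line in the expected format
sample='2024-05-01T12:30:45 ERROR database connection refused'

# Rough regex stand-in for TIMESTAMP_ISO8601 followed by LOGLEVEL
echo "$sample" | grep -E '^[0-9]{4}-[0-9]{2}-[0-9]{2}[T ][0-9]{2}:[0-9]{2}:[0-9]{2} (TRACE|DEBUG|INFO|WARN|ERROR|FATAL) '

# The daily index name logs-%{+YYYY.MM.dd} corresponds to:
date +logs-%Y.%m.%d
```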