Kibana logging system

This post walks through building a logging system with filebeat, kafka, logstash, elasticsearch, and kibana, covering each component's configuration file and run commands. It also addresses common Elasticsearch problems, such as the file descriptor, thread count, and virtual memory area limits, with the configuration-file changes that resolve them, plus download links for each component.


filebeat+kafka+logstash+elasticsearch+kibana

Java environment:

tar -zxf jdk-8u151-linux-x64.tar.gz
mv jdk1.8.0_151 /usr/local/
vim /etc/profile
#append the following lines to /etc/profile:
export JAVA_HOME=/usr/local/jdk1.8.0_151
export JAVA_BIN=/usr/local/jdk1.8.0_151/bin
export PATH=$PATH:$JAVA_BIN
export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
source /etc/profile   #reload the profile in the current shell
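
To confirm the new environment variables are active in the current shell:

java -version    #should report java version "1.8.0_151"
echo $JAVA_HOME  #should print /usr/local/jdk1.8.0_151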

filebeat:

    Configure filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/log/* #log path

  ignore_older: 2h
  max_bytes: 1048576
  scan_frequency: 20s
  close_inactive: 3m
  backoff: 3s
  clean_inactive: 72h
  exclude_files: ['\.log$', '\.tgz$', '\.gz$', '\.txt$']

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
#index template settings
setup.template.name: "shirolog-gameserver"
setup.template.pattern: "shirolog-gameserver-*"
setup.template.overwrite: true
setup.template.enabled: true
setup.ilm.enabled: false

output.kafka:
  #initial brokers for reading cluster metadata
  hosts: ["localhost:9092"]

  #message topic selection + partitioning
  topic: 'test'
  partition.round_robin:
    reachable_only: false

  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
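
Before starting Filebeat, the configuration and the Kafka connection can be sanity-checked with Filebeat's built-in test subcommands (the broker must already be running for the output test):

./filebeat test config -c filebeat.yml   #validate filebeat.yml
./filebeat test output -c filebeat.yml   #check the connection to localhost:9092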

Run commands:

./filebeat -e -c filebeat.yml -d "publish"   #run in the foreground with publish debug output
nohup ./filebeat -e -c filebeat.yml > nohup.out 2>&1 &   #run in the background
#Running Filebeat as a custom systemd service:
#if Filebeat was installed from an archive and exits on its own when backgrounded with &, run it as a service instead
#/exec/exec/filebeat is the install directory used below
#vim /usr/lib/systemd/system/filebeat.service
#systemctl daemon-reload    (reload unit files)
#systemctl enable filebeat  (start on boot)
#systemctl start filebeat   (start Filebeat now)

[Unit]
Description=Filebeat is a lightweight shipper for logs.
Documentation=https://www.elastic.co/products/beats/filebeat
Wants=network-online.target
After=network-online.target
 
[Service]
Environment="LOG_OPTS=-e"
Environment="CONFIG_OPTS=-c /exec/exec/filebeat/filebeat.yml"
Environment="PATH_OPTS=-path.home /exec/exec/filebeat/filebeat -path.config /exec/exec/filebeat/filebeat -path.data /exec/exec/filebeat/data -path.logs /exec/exec/filebeat/logs"
ExecStart=/exec/exec/filebeat/filebeat $LOG_OPTS $CONFIG_OPTS $PATH_OPTS
Restart=always
 
[Install]
WantedBy=multi-user.target
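
Once the unit is enabled, the service state and its logs can be inspected through systemd:

systemctl status filebeat
journalctl -u filebeat -f   #follow Filebeat's log output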

kafka:

Run commands:

bin/zookeeper-server-start.sh config/zookeeper.properties   #start ZooKeeper first
bin/kafka-server-start.sh config/server.properties          #then start the Kafka broker
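
The 'test' topic that filebeat.yml publishes to must exist unless the broker auto-creates topics; with Kafka 2.2+ it can be created directly against the broker (a sketch for a single-broker setup):

bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --topic test --partitions 1 --replication-factor 1
bin/kafka-topics.sh --list --bootstrap-server localhost:9092   #confirm 'test' is listed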

Create a console consumer (to receive messages):

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
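
To verify the broker end to end before wiring in Filebeat, a console producer in a second terminal can send a test line, which the consumer above should echo back (on Kafka releases older than 2.5, use --broker-list instead of --bootstrap-server):

bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test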

Kafka documentation

logstash:

Configuration file first-pipeline.conf

input {
    kafka {
        bootstrap_servers => "localhost:9092"
        topics => "test"
        group_id => "ttt"
    }
}
output {
    elasticsearch {
        hosts => "localhost:9200"
        index => "test_index"
    }
}
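
Filebeat's Kafka output publishes each event as a JSON document, so if those events should be decoded into fields rather than kept as one raw message string, a json codec can be added to the input. A sketch with the same broker and topic as above:

input {
    kafka {
        bootstrap_servers => "localhost:9092"
        topics => "test"
        group_id => "ttt"
        codec => "json"    #decode the JSON events Filebeat published
    }
}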

Run command:

bin/logstash -f config/first-pipeline.conf --config.reload.automatic --path.data=/logstash-7.13.0/logs

--path.data=/logstash-7.13.0/logs sets the directory where Logstash stores its data.
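
Before relying on automatic reload, the pipeline syntax can be validated without actually starting Logstash:

bin/logstash -f config/first-pipeline.conf --config.test_and_exit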

elasticsearch:

Configuration file elasticsearch.yml

cluster.name: my-application #cluster name
node.name: node-1 #node name
#data and log storage directories
path.data: /elasticsearch-7.13.0/logs/elasticsearch
path.logs: /elasticsearch-7.13.0/logs/elasticsearch
#bind address; set to 0.0.0.0 to make the node reachable from any host
network.host: localhost
http.port: 9200  #port
#names of the master-eligible nodes in the cluster (the node.name set above); for a single-node setup, listing this one node is enough
cluster.initial_master_nodes: ["node-1"]

Run commands:

sh bin/elasticsearch
#run in the background (daemonized)
sh bin/elasticsearch -d
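
Once the node is up, a quick smoke test from the same host:

curl http://localhost:9200                          #node name, version, and cluster name
curl http://localhost:9200/_cluster/health?pretty   #status should be green or yellow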

Configuration file jvm.options

#size the heap according to your data volume and server resources
-Xms500m
-Xmx500m

kibana:

Configuration file kibana.yml

server.port: 5601
server.host: localhost
elasticsearch.hosts: ["http://localhost:9200"]

Run command:

./kibana
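
As with Filebeat above, Kibana can be kept running after the shell exits with nohup (a sketch; a systemd unit works as well):

nohup ./kibana > nohup.out 2>&1 &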

Common Elasticsearch problems

ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Explanation:

1. max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

   The per-process limit on open files is too low. Edit /etc/security/limits.conf and add the configuration below; it takes effect after the user logs out and logs back in.

2. max number of threads [3818] for user [es] is too low, increase to at least [4096]

   Likewise, the maximum thread count is too low. The fix goes in the same file, /etc/security/limits.conf:

*               soft    nofile          65536
*               hard    nofile          65536
*               soft    nproc           4096
*               hard    nproc           4096
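
After logging out and back in, the new limits can be verified for the current user:

ulimit -n   #max open files, should print 65536
ulimit -u   #max user processes, should print 4096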

3. max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

   Edit /etc/sysctl.conf and add vm.max_map_count=262144:

echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
#apply the /etc/sysctl.conf changes
/sbin/sysctl -p
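
To confirm the kernel picked up the new value:

/sbin/sysctl vm.max_map_count   #should print vm.max_map_count = 262144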

Add an ingest pipeline, then restart for it to take effect:

curl -X PUT "localhost:9200/_ingest/pipeline/New-pipeline?pretty" -H 'Content-Type: application/json' -d'
{
  "description" : "New-pipeline",
  "processors" : [
    {
      "grok" :{
        "field" : "message",
        "patterns" : ["(?<datetime>(?>\\d\\d){1,2}[- ]%{MONTHNUM}[- ]%{MONTHDAY}[- ]%{TIME}) %{DATA:processname} %{DATA:servername}((\\s)?(\\s))%{LOGLEVEL:level} (?<message>([\\S+]*)\\s?.*\\S+)"]
      }
    }
  ]
}
'
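
The pipeline can be exercised without indexing anything via the _simulate API; the sample log line below is hypothetical and only needs to match the grok layout above:

curl -X POST "localhost:9200/_ingest/pipeline/New-pipeline/_simulate?pretty" -H 'Content-Type: application/json' -d'
{
  "docs": [
    { "_source": { "message": "2021-06-01 12:00:00.123 gameserver node-1  INFO player login ok" } }
  ]
}
'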

Test Elasticsearch

curl 'localhost:9200/_cat/indices?v'                     #list all indices
curl -XGET 'localhost:9200/test_index/_search?pretty'    #query the index written by Logstash

Download links:

filebeat

kafka

logstash

elasticsearch

kibana
