Architecture
In this architecture Redis is used as a cache: Filebeat collects logs on the clients and sends them to the Redis service, and Logstash then reads them from Redis and stores them in Elasticsearch.
Redis acts as a temporary buffer, so that logs are not lost if Logstash goes down.
Note that only Filebeat runs on the servers whose logs are to be collected; all other services run on the same machine, i.e. the log server.
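A rough sketch of the data flow (the log-server address 172.16.1.7 and the channel name xxx-web come from the configurations later in this article):
Filebeat (app servers) --> Redis channel "xxx-web" (172.16.1.7) --> Logstash --> Elasticsearch --> Kibana (log server)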
Kibana
Installation
# wget --quiet https://artifacts.elastic.co/downloads/kibana/kibana-5.4.1-x86_64.rpm
# yum localinstall -y kibana-5.4.1-x86_64.rpm
# chkconfig kibana on
# service kibana start
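By default Kibana 5.x listens only on localhost. If it needs to be reachable from other machines, adjust /etc/kibana/kibana.yml; a minimal sketch (the values below are assumptions matching the single log-server layout, where Elasticsearch runs locally):
# vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"
# service kibana restart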
Elasticsearch
# wget --quiet https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.4.1.rpm
# yum localinstall -y elasticsearch-5.4.1.rpm
# chkconfig elasticsearch on
# service elasticsearch start
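To verify that Elasticsearch is up (it listens on localhost:9200 by default):
# curl http://localhost:9200
The response should be a small JSON document with the node name, cluster name and version.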
If you need to change where Elasticsearch stores some of its files by default (such as "DATA_DIR"), edit the Elasticsearch init script /etc/init.d/elasticsearch; changing the configuration file /etc/elasticsearch/elasticsearch.yml does not take effect:
# vim /etc/init.d/elasticsearch
...
# Sets the default values for elasticsearch variables used in this script
ES_USER="elasticsearch"
ES_GROUP="elasticsearch"
ES_HOME="/usr/share/elasticsearch"
MAX_OPEN_FILES=65536
MAX_MAP_COUNT=262144
LOG_DIR="/var/log/elasticsearch"
#DATA_DIR="/var/lib/elasticsearch"
DATA_DIR="/data/elk_data/elasticsearch"
CONF_DIR="/etc/elasticsearch"

PID_DIR="/var/run/elasticsearch"

...
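If the data directory is relocated as above, create it and hand it to the elasticsearch user/group (the ES_USER/ES_GROUP values from the script) before restarting:
# mkdir -p /data/elk_data/elasticsearch
# chown -R elasticsearch:elasticsearch /data/elk_data/elasticsearch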
If needed, restart Elasticsearch:
# service elasticsearch restart
Redis
Installation
# yum install -y redis
# chkconfig redis on
# service redis start
Change the Redis listen address
# vim /etc/redis.conf
...
#
# bind 127.0.0.1
bind 172.16.1.7

...
Restart the Redis service
# service redis restart
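To confirm Redis is listening on the new address:
# redis-cli -h 172.16.1.7 ping
PONG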
Logstash
Installation
# wget --quiet https://artifacts.elastic.co/downloads/logstash/logstash-5.4.1.rpm
# yum localinstall -y logstash-5.4.1.rpm
By default, Logstash is installed under the /usr/share/logstash directory.
Create the configuration file
# vim /etc/logstash/conf.d/redis-xxx-web.conf
input {
    redis {
        data_type => "pattern_channel"
        #data_type => "list"
        key => "xxx-web"
        host => "172.16.1.7"
        port => 6379
        threads => 5
    }
}
filter {}
output {
    stdout { codec => rubydebug }
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "xxx-web-%{+YYYY.MM.dd}" ## Corresponds to the value used when creating the "Index Patterns" in Kibana, e.g. xxx-web-*
    }
}
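Optionally, check the configuration syntax first (Logstash 5.x supports a test-and-exit flag):
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-xxx-web.conf --config.test_and_exit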
Start Logstash
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-xxx-web.conf
If you need to run multiple Logstash instances on the same machine, add the --path.data option to the startup command so that each instance has its own data directory, e.g. --path.data "/data/elk_data/logstash/xxx-web"; see the example below.
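For example, the full command for this instance with its own data directory would look like this (each instance must point at a different path.data directory, since Logstash places a lock file there):
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-xxx-web.conf --path.data "/data/elk_data/logstash/xxx-web"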
Filebeat
Installation
Unlike the other services above, this one is installed on each server whose logs need to be collected.
# wget --quiet https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.4.2-x86_64.rpm
# yum localinstall -y filebeat-5.4.2-x86_64.rpm
# chkconfig filebeat on
# service filebeat start
Modify the configuration file to specify which logs to collect and to send them to Redis
# grep -v -E '^#|^$| #' /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/*.log
  exclude_files: [".gz$"]
- input_type: log
  paths:
    - /xxx/tomcat/logs/catalina.2017*.out
  multiline.pattern: '^[[:alpha:]]|^[[:space:]]'
  multiline.negate: false
  multiline.match: after
  multiline.timeout: 10s
- input_type: log
  paths:
    - /xxx/tomcat/logs/localhost_access_log.2017*.txt
output.redis:
  enabled: true
  hosts: ["172.16.1.7:6379"]
  key: xxx-web ## Note this value: it must match the "key" in the Logstash config ("/etc/logstash/conf.d/redis-xxx-web.conf").
  datatype: channel
  worker: 50
  timeout: 10s
path.data: /data/elk_data/filebeat
logging.to_files: true
logging.files:
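After editing the configuration, restart Filebeat. To check that events are flowing, you can subscribe to the Redis channel (the output above publishes to a pub/sub channel) and, on the log server, confirm that the index appears in Elasticsearch; a sketch:
# service filebeat restart
# redis-cli -h 172.16.1.7 subscribe xxx-web
# curl 'http://localhost:9200/_cat/indices?v'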
References (official documentation)
logstash: plugins-inputs-redis
logstash: plugins-outputs-elasticsearch
https://www.elastic.co/guide/index.html
end!
This completes the installation/configuration of each service; configuration in the Kibana web UI is omitted here.