Introduction to Logstash
Logstash is an open-source data collection engine with real-time pipelining capabilities. It can dynamically unify data from disparate sources and normalize it into the destinations of your choice.
Official documentation:
https://www.elastic.co/guide/en/logstash/7.6/plugins-outputs-file.html
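A Logstash pipeline always has three stages: inputs read events, filters (optional) transform them, and outputs ship them somewhere. As a rough sketch of the structure used throughout this post (the plugins here are only placeholders):
input { # where events come from: stdin, file, syslog, ...
  stdin {}
}
filter { # optional: parse and enrich events, e.g. with grok
}
output { # where events go: stdout, file, elasticsearch, ...
  stdout {}
}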
Logstash data collection
Installation
rpm -ivh jdk-8u171-linux-x64.rpm
rpm -ivh logstash-7.6.1.rpm
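Optionally verify the installation before continuing (the paths below are the RPM defaults):
java -version
/usr/share/logstash/bin/logstash --version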
Log collection and output plugins
Collect data on the command line and display it in the terminal
ln -s /usr/share/logstash/bin/logstash /usr/local/bin #create a symbolic link
logstash -e 'input { stdin {}} output { stdout {} }'
Whatever you type is echoed back immediately on the terminal as standard output.
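If you want each event printed as a structured map instead of a plain line, the stdout plugin also accepts the rubydebug codec; the same quick test would then be:
logstash -e 'input { stdin {}} output { stdout { codec => rubydebug } }'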
Run from a configuration file, writing the output to the file /tmp/testfile
cd /etc/logstash/conf.d/
vim test.conf
input {
  stdin {}
}
output {
  file {
    path => "/tmp/testfile"
    codec => line { format => "custom format: %{message}" }
  }
}
The file output plugin; the -f option points Logstash at a configuration file
logstash -f test.conf
Check the result:
cat /tmp/testfile
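Given the line codec format above, each event is written as one line prefixed with custom format:. For example, typing hello world at the prompt (example input only) should leave something like this in the file:
custom format: hello world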
Connecting Logstash to Elasticsearch
vim es.conf
input {
  stdin {}
}
output {
  stdout {}
  elasticsearch {
    hosts => ["172.25.0.3:9200"]
    index => "logstash-%{+yyyy.MM.dd}"
  }
}
logstash -f es.conf
Type anything; when you are done, press Ctrl+C to stop collecting.
Open the monitoring page to browse the collected data.
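Besides the monitoring page, you can confirm that the index was created by asking Elasticsearch directly (adjust the host to your cluster):
curl '172.25.0.3:9200/_cat/indices?v'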
Collecting local log files and sending them to Elasticsearch
input {
  file {
    path => "/var/log/messages"
    start_position => "beginning"
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["172.25.76.1:9200"]
    index => "logstash-%{+yyyy.MM.dd}"
  }
}
logstash -f es.conf
Collecting the same log a second time only takes effect once, because Logstash has written a sincedb file that records how far it has already read.
The sincedb file lives in:
cd /usr/share/logstash/data/plugins/inputs/file/
l. #list hidden files (l. is the RHEL alias for ls -d .*)
To re-index the file from the beginning, the sincedb file has to be deleted:
rm -f .sincedb_*
After deleting it, run Logstash again to regenerate the index for the log.
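Deleting the sincedb works, but while testing you can also tell the file input not to persist its read position at all, by pointing sincedb_path at /dev/null. A sketch of that variant:
input {
  file {
    path => "/var/log/messages"
    start_position => "beginning"
    sincedb_path => "/dev/null" # never remember the offset, always re-read from the start
  }
}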
Collecting system logs with the syslog input plugin
On the host whose logs are to be collected:
vim /etc/rsyslog.conf
# open TCP port 514 (load the TCP syslog input module)
$ModLoad imtcp
$InputTCPServerRun 514
# forward all log messages over TCP to the Logstash host at 172.25.9.7:514
*.* @@172.25.9.7:514
Restart the service:
systemctl restart rsyslog.service
input {
#  file {
#    path => "/var/log/messages"
#    start_position => "beginning"
#  }
  syslog {
    port => 514
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["172.25.76.4:9200"]
    index => "syslog-%{+yyyy.MM.dd}"
  }
}
logstash -f es.conf
Check the data on the monitoring page.
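To generate a test event, run logger on the host being collected; the message should then appear both in the Logstash stdout and in the syslog-* index:
logger "test message from the rsyslog client"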
Multiline codec plugin
Reading the Elasticsearch cluster log with the multiline codec
Copy the my-es.log file from server1 to the /var/log/ directory on server4 (the Logstash host):
scp /var/log/elasticsearch/my-es.log server4:/var/log/
input {
  file {
    path => "/var/log/my-es.log"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\[" #lines that start with [
      negate => "true"
      what => "previous"
    }
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["172.25.76.1:9200"]
    index => "eslog-%{+yyyy.MM.dd}"
  }
}
logstash -f test.conf
Check the monitoring page.
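To recap how the multiline codec behaves here: pattern => "^\[" matches lines that start with [, negate => "true" inverts that match, and what => "previous" folds every non-matching line into the previous event. A made-up log fragment (hypothetical content, for illustration only) would therefore be merged like this:
[2021-01-01T00:00:00,000][ERROR][o.e.b.Bootstrap] fatal error    <- starts a new event
java.lang.IllegalStateException: something went wrong            <- folded into the event above
    at org.elasticsearch.bootstrap.Bootstrap.init(...)           <- folded into the event above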
grok filter plugin
Slicing the Apache access log into fields
vim grok.conf
input {
  file {
    path => "/var/log/httpd/access_log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{HTTPD_COMBINEDLOG}" }
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["172.25.76.1:9200"]
    index => "apachelog-%{+yyyy.MM.dd}"
  }
}
Pattern templates for common services are already shipped with Logstash; they are located in:
cd /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns
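To see how HTTPD_COMBINEDLOG is defined, and which field names it will produce, search that patterns directory:
grep -r "HTTPD_COMBINEDLOG" .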
Install Apache on server4:
yum install -y httpd
echo www.westos.org > /var/www/html/index.html
systemctl start httpd
chmod 755 /var/log/httpd #let non-root users (e.g. the logstash user) read the log directory
Access the service a few times:
curl 172.25.76.4
logstash -f grok.conf #run this first, then curl
Check the result on the monitoring page.
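You can also query the index directly to inspect the fields grok extracted (clientip, verb, request, response and so on); adjust the host if yours differs:
curl '172.25.76.1:9200/apachelog-*/_search?pretty'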
Related elasticsearch.yml settings that disable the data, ingest, and machine-learning roles on a node:
node.data: false
node.ingest: false
node.ml: false