Deploying EFK
Basics
System Architecture
Host Planning
Host IP | Hostname | CPU/MEM | Role |
---|---|---|---|
192.168.23.128 | elk-node-01 | 2C/4G | Elasticsearch+Logstash+Nginx |
192.168.23.129 | elk-node-02 | 2C/4G | Elasticsearch+Logstash+Tomcat |
192.168.23.130 | elk-node-03 | 2C/4G | Elasticsearch+Kibana |
Basic Configuration
- Set hostnames
#Run the matching command on each host
hostnamectl set-hostname elk-node-01
hostnamectl set-hostname elk-node-02
hostnamectl set-hostname elk-node-03
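- Optionally map hostnames to IPs so the nodes can reach each other by name (not required by the steps below; a convenience sketch)
#Run on all nodes
cat >> /etc/hosts << EOF
192.168.23.128 elk-node-01
192.168.23.129 elk-node-02
192.168.23.130 elk-node-03
EOF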
- Disable the firewall
#Run on all nodes (you can either disable the firewall or add explicit firewall rules instead)
systemctl stop firewalld
systemctl disable firewalld
- Disable SELinux
#Run on all nodes
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
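#Optional check: getenforce should now report Permissive (Disabled after a reboot)
getenforce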
- Configure yum repositories
#Run on all nodes
wget -O /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache
- Tune the sshd service
#Run on all nodes
sed -ri 's/^#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config
sed -ri 's/^GSSAPIAuthentication yes/GSSAPIAuthentication no/g' /etc/ssh/sshd_config
#Restart sshd so the changes take effect
systemctl restart sshd
- Cluster time synchronization
#Run on all nodes
yum -y install ntpdate chrony
sed -ri 's/^server/#server/g' /etc/chrony.conf
echo "server ntp.aliyun.com iburst" >> /etc/chrony.conf
systemctl start chronyd
systemctl enable chronyd
#Show statistics for the NTP sources chronyd is currently using
chronyc sourcestats -v
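#Optionally confirm each node is actually synchronized; chronyc tracking reports the offset from the selected source (any small offset is fine)
chronyc tracking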
Elasticsearch Deployment
Note: the nodes involved are 192.168.23.128, 192.168.23.129 and 192.168.23.130
Installation
- Download the rpm package
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.22-x86_64.rpm
- Install Elasticsearch (using the same 7.17.22 release as the Kibana and Logstash packages below, so the versions match)
yum -y localinstall elasticsearch-7.17.22-x86_64.rpm
Configuration
- Edit elasticsearch.yml on 192.168.23.128
mv /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml_`date +%F`
cat > /etc/elasticsearch/elasticsearch.yml << EOF
cluster.name: ELK-learning
node.name: elk-node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.23.128
http.port: 9200
discovery.seed_hosts: ["192.168.23.128", "192.168.23.129", "192.168.23.130"]
cluster.initial_master_nodes: ["192.168.23.128", "192.168.23.129", "192.168.23.130"]
EOF
- Edit elasticsearch.yml on 192.168.23.129
mv /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml_`date +%F`
cat > /etc/elasticsearch/elasticsearch.yml << EOF
cluster.name: ELK-learning
node.name: elk-node-2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.23.129
http.port: 9200
discovery.seed_hosts: ["192.168.23.128", "192.168.23.129", "192.168.23.130"]
cluster.initial_master_nodes: ["192.168.23.128", "192.168.23.129", "192.168.23.130"]
EOF
- Edit elasticsearch.yml on 192.168.23.130
mv /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml_`date +%F`
cat > /etc/elasticsearch/elasticsearch.yml << EOF
cluster.name: ELK-learning
node.name: elk-node-3
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.23.130
http.port: 9200
discovery.seed_hosts: ["192.168.23.128", "192.168.23.129", "192.168.23.130"]
cluster.initial_master_nodes: ["192.168.23.128", "192.168.23.129", "192.168.23.130"]
EOF
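- Optionally pin the JVM heap (a sketch, assuming the jvm.options.d drop-in directory of the rpm install; on these 4G hosts roughly half of RAM is a common choice, and the file name heap.options is arbitrary)
#Run on all three nodes
cat > /etc/elasticsearch/jvm.options.d/heap.options << EOF
-Xms2g
-Xmx2g
EOF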
Start
Note: Elasticsearch is slow to start; be patient
systemctl start elasticsearch
systemctl enable elasticsearch
Verification
- Check listening ports
ss -ntl
- Check the service
systemctl status elasticsearch
- Verify the service
curl -iv 192.168.23.128:9200
curl -iv 192.168.23.129:9200
curl -iv 192.168.23.130:9200
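- Optionally confirm the three nodes actually formed one cluster; these are standard Elasticsearch APIs and any node can be queried
#All three nodes should be listed and the cluster status should be green
curl "http://192.168.23.128:9200/_cat/nodes?v"
curl "http://192.168.23.128:9200/_cluster/health?pretty"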
Kibana Deployment
Note: this involves host 192.168.23.130
Installation
- Download the rpm package
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.17.22-x86_64.rpm
- Install Kibana
yum -y localinstall kibana-7.17.22-x86_64.rpm
Configuration
mv /etc/kibana/kibana.yml /etc/kibana/kibana.yml_`date +%F`
cat > /etc/kibana/kibana.yml << EOF
server.port: 5601
server.host: "192.168.23.130"
server.maxPayload: 1048576
server.name: "ELK-Kibana"
elasticsearch.hosts: ["http://192.168.23.128:9200", "http://192.168.23.129:9200", "http://192.168.23.130:9200"]
i18n.locale: "zh-CN"
EOF
Start
systemctl start kibana
systemctl enable kibana
Verification
- Check listening ports
ss -ntl
- Check the service
systemctl status kibana
- Verify the service
curl -iv http://192.168.23.130:5601
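- Optionally query the status API instead of opening a browser (a sketch; the overall level in the JSON response should be available)
curl -s http://192.168.23.130:5601/api/status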
Logstash Deployment
Installation
- Download the rpm package
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.17.22-x86_64.rpm
- Install Logstash
yum -y localinstall logstash-7.17.22-x86_64.rpm
Configuration
- logstash.yml configuration
mv /etc/logstash/logstash.yml /etc/logstash/logstash.yml_`date +%F`
cat > /etc/logstash/logstash.yml << EOF
#Node name; usually set to the hostname
node.name: elk-node-01
#Directory where Logstash stores plugin and other internal data
path.data: /var/lib/logstash
#Address the Logstash HTTP API listens on
http.host: "0.0.0.0"
#Port for the Logstash HTTP API; a single port or a range, default 9600-9700
http.port: 9600
#Path where Logstash writes its own logs
path.logs: /var/log/logstash
EOF
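Note: logstash.yml only holds node-level settings; with the rpm install the systemd service loads its pipelines from /etc/logstash/pipelines.yml, which by default points at /etc/logstash/conf.d/*.conf. That is why the collection pipelines later in this document are written to that directory. A quick way to confirm:
#Show the default pipeline definition shipped with the rpm package
cat /etc/logstash/pipelines.yml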
Start
#Configure environment variables so the logstash command can be run directly
#(the systemd service itself is started later, in the log collection section)
cat >> /etc/profile << 'EOF'
export PATH=$PATH:/usr/share/logstash/bin
EOF
source /etc/profile
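#Confirm the PATH change works; this should print the installed version (the first run can take a while)
logstash --version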
Verification
- Check listening ports
ss -ntl
- Verify Logstash with a stdin-to-console pipeline
cat > /etc/logstash/logstash-stdin-to-console.conf << EOF
input {
stdin {}
}
output {
stdout {}
}
EOF
logstash -f /etc/logstash/logstash-stdin-to-console.conf
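As an alternative to writing a file, the same smoke test can be passed inline with Logstash's -e flag; type a line and it should be echoed back as a structured event:
logstash -e 'input { stdin {} } output { stdout {} }'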
- Verify the Logstash HTTP API
curl -iv 127.0.0.1:9600
ELK Log Collection
Nginx Environment
Note: this involves host 192.168.23.128
Install Nginx
useradd nginx
#Install the compiler toolchain and libraries needed for the modules enabled below
yum -y install gcc make pcre-devel zlib-devel openssl-devel
wget https://nginx.org/download/nginx-1.22.1.tar.gz
tar -zxvf nginx-1.22.1.tar.gz -C /usr/local/
cd /usr/local/nginx-1.22.1
./configure --prefix=/usr/local/nginx \
--user=nginx --group=nginx \
--with-http_realip_module \
--with-http_v2_module \
--with-http_stub_status_module \
--with-http_ssl_module \
--with-http_gzip_static_module \
--with-stream \
--with-stream_ssl_module \
--with-http_sub_module
#Compile and install (configure only generates the Makefile)
make && make install
chown -R nginx:nginx /usr/local/nginx
Configure Nginx
mkdir -p /data/html/
cat > /data/html/index.html << EOF
test learning for EFK
EFK 的学习测试
EOF
chown -R nginx:nginx /data/html/
cat > /usr/local/nginx/conf/nginx.conf << 'EOF'
worker_processes 1;
error_log logs/error.log info;
pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name localhost;
charset utf-8;
location / {
root /data/html/;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
EOF
Start Nginx
setcap cap_net_bind_service=+ep /usr/local/nginx/sbin/nginx
su - nginx
/usr/local/nginx/sbin/nginx -t
/usr/local/nginx/sbin/nginx
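Before wiring Logstash to these logs, it is worth generating a few requests and confirming the test page is served (run on 192.168.23.128; the response is just the index.html created above):
#Hit the test page, then check that access log lines were written
curl http://192.168.23.128/
tail -n 3 /usr/local/nginx/logs/access.log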
Log Collection - Raw Logs
logstash.conf configuration
cat > /etc/logstash/conf.d/file-to-logstash.conf << EOF
input {
file {
path => ["/usr/local/nginx/logs/error.log*"]
start_position => "beginning"
}
file {
path => ["/usr/local/nginx/logs/access.log*"]
start_position => "beginning"
}
}
output {
elasticsearch {
hosts => ["http://192.168.23.128:9200", "http://192.168.23.129:9200", "http://192.168.23.130:9200"]
index => "app-logs-%{+YYYY.MM.dd}"
}
}
EOF
Start Logstash
#Clear Logstash's local state (including the file input sincedb offsets) so the logs are read from the beginning
rm -rf /var/lib/logstash/*
systemctl start logstash
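Once Logstash has picked the files up, the daily index configured above (app-logs-<date>) should appear in Elasticsearch; a quick check from any node:
#The app-logs index should show up with a growing document count
curl "http://192.168.23.128:9200/_cat/indices?v"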
Log Collection - grok Plugin
logstash.conf configuration
cat > /etc/logstash/conf.d/file-to-logstash.conf << EOF
#Collect the log files
input {
file {
path => ["/usr/local/nginx/logs/error.log*"]
start_position => "beginning"
}
file {
path => ["/usr/local/nginx/logs/access.log*"]
start_position => "beginning"
}
}
#Filtering
filter {
#grok parses unstructured text into structured fields
grok {
#Specify the field to parse and the pattern(s) to match against it
match => {
#Parse the message field with the combined Apache/Nginx access-log pattern
"message" => "%{COMBINEDAPACHELOG}"
}
}
}
#Send the events to Elasticsearch
output {
elasticsearch {
hosts => ["http://192.168.23.128:9200", "http://192.168.23.129:9200", "http://192.168.23.130:9200"]
index => "app-logs-%{+YYYY.MM.dd}"
}
}
EOF
Restart Logstash
Note: loading the data is fairly slow; expect to wait a minute or so
#Stop the running service, clear its local state so the files are re-read from the beginning, then start it with the new pipeline
systemctl stop logstash
rm -rf /var/lib/logstash/*
systemctl start logstash
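To confirm the grok filter actually structured the access-log lines, pull a document back and look for the fields produced by COMBINEDAPACHELOG (clientip, verb, response and so on), either in Kibana or with a quick query:
#Fetch a single parsed document from the app-logs indices
curl 'http://192.168.23.128:9200/app-logs-*/_search?size=1&pretty'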