ELK Introduction
Background:
- The business keeps growing and the number of servers keeps increasing.
- Access logs, application logs, and error logs keep piling up; developers have to log in to servers to check logs, which is inconvenient.
- Operations staff are asked for data, which means ops has to go to the servers and analyze logs by hand.
- Official site: https://www.elastic.co/cn/
- Chinese guide: https://www.gitbook.com/book/chenryn/elk-stack-guide-cn/details
- ELK Stack (from version 5.0 onward): Elastic Stack == ELK Stack + Beats
- The ELK Stack consists of Elasticsearch, Logstash, and Kibana.
- Elasticsearch is a search engine used to search, analyze, and store logs. It is distributed, meaning it can scale out horizontally, supports automatic node discovery, and shards indices automatically; in short, it is very capable.
- Documentation: https://www.elastic.co/guide/cn/elasticsearch/guide/current/index.html
- Logstash collects logs, parses them into JSON, and hands them to Elasticsearch.
- Kibana is a data visualization component that presents the processed results through a web interface.
- Beats here acts as a lightweight log shipper; the Beats family actually has five members.
- Early ELK architectures used Logstash to collect and parse logs, but Logstash is relatively heavy on memory, CPU, and I/O. Compared with Logstash, the CPU and memory footprint of Beats is almost negligible.
- X-Pack is a paid extension pack that adds security, alerting, monitoring, reporting, and graph capabilities to the Elastic Stack.
ELK installation preparation
Installing Elasticsearch
Prepare three servers:
192.168.2.115 -- master node
192.168.2.116 -- data node
192.168.2.117 -- data node
1. Role assignment:
192.168.2.115 -- master node, also runs Kibana
192.168.2.116 -- data node, also runs Logstash
All three machines run Elasticsearch.
Note: the steps below must be executed on all three machines.
#Import the Elasticsearch GPG key
[root@root-01 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
#Create a repo file
[root@root-03 ~]# vim /etc/yum.repos.d/elastic.repo
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
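Optional quick check before installing: confirm that yum actually picks up the new repository (the repo id below matches the [elasticsearch-6.x] section defined above).
#Optional: verify the repo is enabled
[root@root-01 ~]# yum repolist enabled | grep elasticsearch-6.x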
#Install elasticsearch with yum
[root@root-01 ~]# yum list |grep elastic
apm-server.i686 6.1.1-1 elasticsearch-6.x
apm-server.x86_64 6.1.1-1 elasticsearch-6.x
auditbeat.i686 6.1.1-1 elasticsearch-6.x
auditbeat.x86_64 6.1.1-1 elasticsearch-6.x
elasticsearch.noarch 6.1.1-1 elasticsearch-6.x
filebeat.i686 6.1.1-1 elasticsearch-6.x
filebeat.x86_64 6.1.1-1 elasticsearch-6.x
heartbeat-elastic.i686 6.1.1-1 elasticsearch-6.x
heartbeat-elastic.x86_64 6.1.1-1 elasticsearch-6.x
kibana.x86_64 6.1.1-1 elasticsearch-6.x
logstash.noarch 1:6.1.1-1 elasticsearch-6.x
metricbeat.i686 6.1.1-1 elasticsearch-6.x
metricbeat.x86_64 6.1.1-1 elasticsearch-6.x
packetbeat.i686 6.1.1-1 elasticsearch-6.x
packetbeat.x86_64 6.1.1-1 elasticsearch-6.x
pcp-pmda-elasticsearch.x86_64 3.11.8-7.el7 base
rsyslog-elasticsearch.x86_64 8.24.0-12.el7 base
[root@root-03 ~]# yum install -y elasticsearch
Install java-1.8.0-openjdk on all three machines
Install:
[root@root-01 ~]# yum install -y java-1.8.0-openjdk
Check the version:
[root@root-01 ~]# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
[root@root-01 ~]# which java
/usr/bin/java
Configuring Elasticsearch
On the master node
Note: the Elasticsearch configuration files are /etc/elasticsearch/elasticsearch.yml and /etc/sysconfig/elasticsearch
[root@root-01 ~]# vim /etc/elasticsearch/elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
................................................................................
................................................................................
................................................................................
#cluster.name: my-application
cluster.name: Elk-application //cluster name, can be customized
# ------------------------------------ Node ------------------------------------
................................................................................
................................................................................
................................................................................
#node.name: node-1
node.name: root-01 //node name
node.master: true //role definition: true if this node can act as master, false otherwise
node.data: false //role definition: true if this node stores data, false otherwise
# ---------------------------------- Network -----------------------------------
................................................................................
................................................................................
................................................................................
#network.host: 192.168.0.1 //uncomment and change to: network.host: 192.168.2.115
#http.port: 9200 //just uncomment it: http.port: 9200
# --------------------------------- Discovery ----------------------------------
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["192.168.2.115", "192.168.2.116", "192.168.2.117"]
Note: this defines unicast discovery; hostnames may be used here (as long as they resolve, e.g. via /etc/hosts, as shown below), or plain IPs.
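If hostnames instead of IPs are used in discovery.zen.ping.unicast.hosts, every node must be able to resolve them. A minimal sketch of the /etc/hosts entries, assuming the hostnames root-01/root-02/root-03 used in the prompts of this document:
#Only needed if hostnames are used in the discovery list; run on all three nodes
[root@root-01 ~]# cat >> /etc/hosts <<EOF
192.168.2.115 root-01
192.168.2.116 root-02
192.168.2.117 root-03
EOF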
On the data nodes (both of them)
[root@root-02 ~]# vim /etc/elasticsearch/elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
................................................................................
................................................................................
................................................................................
#cluster.name: my-application
cluster.name: Elk-application //cluster name, can be customized
# ------------------------------------ Node ------------------------------------
................................................................................
................................................................................
................................................................................
#node.name: node-1
node.name: root-02 //node name
node.master: false //role definition: true if this node can act as master, false otherwise
node.data: true //role definition: true if this node stores data, false otherwise
# ---------------------------------- Network -----------------------------------
................................................................................
................................................................................
................................................................................
#network.host: 192.168.0.1 //uncomment and change to: network.host: 192.168.2.116
#http.port: 9200 //just uncomment it: http.port: 9200
# --------------------------------- Discovery ----------------------------------
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["192.168.2.115", "192.168.2.116", "192.168.2.117"]
Note: this defines unicast discovery; hostnames may be used (provided they resolve via /etc/hosts), or plain IPs.
[root@root-03 ~]# vim /etc/elasticsearch/elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
................................................................................
................................................................................
................................................................................
#cluster.name: my-application
cluster.name: Elk-application //cluster name, can be customized
# ------------------------------------ Node ------------------------------------
................................................................................
................................................................................
................................................................................
#node.name: node-1
node.name: root-03 //node name
node.master: false //role definition: true if this node can act as master, false otherwise
node.data: true //role definition: true if this node stores data, false otherwise
# ---------------------------------- Network -----------------------------------
................................................................................
................................................................................
................................................................................
#network.host: 192.168.0.1 //uncomment and change to: network.host: 192.168.2.117
#http.port: 9200 //just uncomment it: http.port: 9200
# --------------------------------- Discovery ----------------------------------
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["192.168.2.115", "192.168.2.116", "192.168.2.117"]
Note: this defines unicast discovery; hostnames may be used (provided they resolve via /etc/hosts), or plain IPs.
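The three elasticsearch.yml files differ only in node.name, node.master, node.data, and network.host, so it is easy to mistype one. A quick way to review just the active (non-comment) settings on each node:
#Show only the effective settings in elasticsearch.yml
[root@root-02 ~]# grep -Ev '^\s*(#|$)' /etc/elasticsearch/elasticsearch.yml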
Starting the Elasticsearch service
#Start elasticsearch
[root@root-01 ~]# systemctl start elasticsearch
#Check whether elasticsearch is running
[root@root-01 log]# ps aux |grep elasticsearch
root 4862 0.0 0.0 112664 972 pts/0 S+ 23:26 0:00 grep --color=auto elasticsearch
Note: there is no elasticsearch process, so the service did not come up.
#Troubleshooting
Note: Elasticsearch's own log is not being written yet, so check /var/log/messages instead.
[root@root-01 ~]# tail -n 200 /var/log/messages
........................................
........................................
........................................
Dec 31 23:19:57 root-01 elasticsearch: Exception in thread "main" 2017-12-31 23:19:57,202 main ERROR No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'log4j2.debug' to show Log4j2 internal initialization logging.
The error says that no log4j2 configuration file was found, so only errors are being logged to the console.
#Fix it according to that cause; the workaround is as follows:
[root@root-01 ~]# yum install -y log4j*
#Restart elasticsearch
[root@root-01 ~]# systemctl start elasticsearch
#Check the process
[root@root-02 ~]# ps aux |grep elasticsearch
elastic+ 3321 3.6 68.9 3632996 689680 ? Ssl 23:43 0:03 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
root 3368 0.6 0.0 112664 976 pts/0 S+ 23:44 0:00 grep --color=auto elasticsearch
#Check the ports: 9200 and 9300
[root@root-01 ~]# netstat -nlpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1395/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2147/sendmail: acce
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 2017/php-fpm: maste
tcp6 0 0 192.168.2.115:9200 :::* LISTEN 5402/java
tcp6 0 0 192.168.2.115:9300 :::* LISTEN 5402/java
tcp6 0 0 :::22 :::* LISTEN 1395/sshd
Checking Elasticsearch with curl
1. curl '192.168.2.115:9200/_cluster/health?pretty' ---> cluster health check
[root@root-01 ~]# curl '192.168.2.115:9200/_cluster/health?pretty'
{
"cluster_name" : "Elk-application",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 2,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
2. curl '192.168.2.115:9200/_cluster/state?pretty' ---> detailed cluster state
[root@root-01 ~]# curl '192.168.2.115:9200/_cluster/state?pretty'
{
"cluster_name" : "Elk-application",
"compressed_size_in_bytes" : 343,
"version" : 4,
"state_uuid" : "na1Sj50iRQ-Zz-yBJNUJUQ",
"master_node" : "LuMCh0GiR16kqB0vZ3kvbw",
"blocks" : { },
"nodes" : {
"LuMCh0GiR16kqB0vZ3kvbw" : {
"name" : "root-01",
"ephemeral_id" : "-VFAkd6WQH2-Q28-FcxvaQ",
"transport_address" : "192.168.2.115:9300",
"attributes" : { }
},
"wD7xAKG_Sn2DeR1m4K1O3A" : {
"name" : "root-02",
"ephemeral_id" : "b9BSaEicQ7GQnDO1LaPCYg",
"transport_address" : "192.168.2.116:9300",
"attributes" : { }
},
"qtCZi5gHSJyobT3714lTYA" : {
"name" : "root-03",
"ephemeral_id" : "tAsVI0oeRNSBi3aTbCYupw",
"transport_address" : "192.168.2.117:9300",
"attributes" : { }
}
},
"metadata" : {
"cluster_uuid" : "eExPC1ViTU2u3WWjwiW94A",
"templates" : { },
"indices" : { },
"index-graveyard" : {
"tombstones" : [ ]
}
},
"routing_table" : {
"indices" : { }
},
"routing_nodes" : {
"unassigned" : [ ],
"nodes" : {
"qtCZi5gHSJyobT3714lTYA" : [ ],
"wD7xAKG_Sn2DeR1m4K1O3A" : [ ]
}
},
"snapshots" : {
"snapshots" : [ ]
},
"restore" : {
"snapshots" : [ ]
},
"snapshot_deletions" : {
"snapshot_deletions" : [ ]
}
}
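Besides _cluster/health and _cluster/state, the _cat APIs give a more readable overview; for example (against any node listening on 9200):
#One line per node, including roles and which node is the elected master
[root@root-01 ~]# curl '192.168.2.115:9200/_cat/nodes?v'
#One-line health summary
[root@root-01 ~]# curl '192.168.2.115:9200/_cat/health?v'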
Installing Kibana
Note: Kibana is the web component that renders the graphs.
1. Install Kibana on the master node only
[root@root-01 src]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-x86_64.rpm
[root@root-01 src]# rpm -ivh kibana-6.0.0-x86_64.rpm
2. Edit the Kibana configuration file
[root@root-01 src]# vim /etc/kibana/kibana.yml
..................
#server.port: 5601
server.port: 5601 //Kibana's default port is 5601; remove the leading #
..................
#server.host: "localhost"
server.host: "192.168.2.115" //IP address to bind to
...................
#elasticsearch.url: "http://localhost:9200"
elasticsearch.url: "http://192.168.2.115:9200" //Kibana needs to talk to Elasticsearch, so point this at the Elasticsearch URL, i.e. the master node
...................
#logging.dest: stdout
logging.dest: /var/log/kibana.log //path of the Kibana log file
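Note: with logging.dest pointing at /var/log/kibana.log, Kibana (which runs as the kibana user) must be able to write that file. If Kibana later fails to start or logs nothing, a common fix is to create the file and hand it to the kibana user first; a sketch:
#Create the log file and make it writable by the kibana user (only needed if it does not exist yet)
[root@root-01 ~]# touch /var/log/kibana.log
[root@root-01 ~]# chown kibana /var/log/kibana.log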
3. Start Kibana
[root@root-01 src]# systemctl start kibana
#Check the process
[root@root-01 src]# ps aux |grep kibana
kibana 2843 20.0 11.5 1123548 115684 ? Ssl 15:12 0:07 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
root 2873 2.0 0.0 112664 968 pts/0 R+ 15:13 0:00 grep --color=auto kibana
#Listening on port 5601
[root@root-01 src]# netstat -nvlpt |grep node
tcp 0 0 192.168.2.115:5601 0.0.0.0:* LISTEN 2843/node
4. Open http://192.168.2.115:5601/ in a browser
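If the browser cannot reach the page, a quick command-line check of whether the service (and any firewall in between) responds:
#Any HTTP response here means Kibana is reachable on 5601
[root@root-01 ~]# curl -I http://192.168.2.115:5601/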
Installing Logstash
1. Download and install Logstash
Note: install it on one of the data nodes (192.168.2.116)
[root@root-02 ~]# cd /usr/local/src
[root@root-02 src]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.rpm
[root@root-02 src]# rpm -ivh logstash-6.0.0.rpm
2. Collecting syslog with Logstash
Note: input  : what comes in, i.e. the log source
output : where the logs are sent
[root@root-02 src]# vim /etc/logstash/conf.d/syslog.conf
input {
syslog {
type => "system-syslog"
port => 10514
}
}
output {
stdout {
codec => rubydebug
}
}
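Before wiring up syslog, the Logstash install itself can be smoke-tested with an inline (-e) pipeline that reads stdin and echoes each line back as an event (Ctrl-C to quit). This is only a sanity check, not part of the syslog setup:
#Smoke test: read from stdin, print events to stdout
[root@root-02 src]# cd /usr/share/logstash/bin
[root@root-02 bin]# ./logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'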
3. Check the configuration file for errors
[root@root-02 src]# cd /usr/share/logstash/bin
[root@root-02 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK ---> the configuration file is fine
#Test command: ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
4. Configure rsyslog to forward logs to port 10514
[root@root-02 bin]# vi /etc/rsyslog.conf
....................
....................
....................
#### RULES ####
*.* @@192.168.2.116:10514 //this line forwards all logs to port 10514 on 192.168.2.116
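For reference: in rsyslog, @@ forwards over TCP and a single @ forwards over UDP. The Logstash syslog input listens on both TCP and UDP on the configured port, so either form works; this setup uses TCP.
#UDP alternative (not used here)
*.* @192.168.2.116:10514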
5. Start Logstash
Note: starting Logstash this way keeps it in the foreground; whenever logs are generated (installing packages, web access, or an SSH login to one of the machines), they are printed to this terminal.
[root@root-02 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
5.1 Open another terminal and restart the rsyslog service
[root@root-02 ~]# systemctl restart rsyslog
[root@root-02 ~]# netstat -nvlpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1683/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1603/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2065/master
tcp 0 0 0.0.0.0:10050 0.0.0.0:* LISTEN 1418/zabbix_agentd
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 2111/php-fpm: maste
tcp6 0 0 192.168.2.116:9200 :::* LISTEN 2290/java
tcp6 0 0 :::10514 :::* LISTEN 2883/java
tcp6 0 0 192.168.2.116:9300 :::* LISTEN 2290/java
tcp6 0 0 :::22 :::* LISTEN 1603/sshd
tcp6 0 0 ::1:25 :::* LISTEN 2065/master
tcp6 0 0 127.0.0.1:9600 :::* LISTEN 2883/java
tcp6 0 0 :::10050 :::* LISTEN 1418/zabbix_agentd
5.2 SSH to root-01
Note: installing software or SSH-ing into a server generates log entries.
[root@root-02 ~]# ssh root-01
The authenticity of host 'root-01 (192.168.2.115)' can't be established.
ECDSA key fingerprint is bd:fd:56:cf:07:80:0d:a6:9c:38:00:be:33:1f:f5:7a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'root-01' (ECDSA) to the list of known hosts.
root@root-01's password:
Last login: Sun Jan 7 13:30:44 2018 from 192.168.2.206
-bash: /etc/profile: Permission denied
#The login events show up in the log
[root@root-02 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
"severity" => 6,
"program" => "rsyslogd",
"message" => "[origin software=\"rsyslogd\" swVersion=\"7.4.7\" x-pid=\"3005\" x-info=\"http://www.rsyslog.com\"] start\n",
"type" => "system-syslog",
"priority" => 46,
"logsource" => "root-02",
"@timestamp" => 2018-01-07T08:52:46.000Z,
"@version" => "1",
"host" => "192.168.2.116",
"facility" => 5,
"severity_label" => "Informational",
"timestamp" => "Jan 7 16:52:46",
"facility_label" => "syslogd"
}
{
"severity" => 6,
"program" => "systemd",
"message" => "Stopping System Logging Service...\n",
"type" => "system-syslog",
"priority" => 30,
"logsource" => "root-02",
"@timestamp" => 2018-01-07T08:52:46.000Z,
"@version" => "1",
"host" => "192.168.2.116",
"facility" => 3,
"severity_label" => "Informational",
"timestamp" => "Jan 7 16:52:46",
"facility_label" => "system"
}
{
"severity" => 6,
"program" => "systemd",
"message" => "Starting System Logging Service...\n",
"type" => "system-syslog",
"priority" => 30,
"logsource" => "root-02",
"@timestamp" => 2018-01-07T08:52:46.000Z,
"@version" => "1",
"host" => "192.168.2.116",
"facility" => 3,
"severity_label" => "Informational",
"timestamp" => "Jan 7 16:52:46",
"facility_label" => "system"
}
{
"severity" => 5,
"pid" => "543",
"program" => "polkitd",
"message" => "Unregistered Authentication Agent for unix-process:2998:1227980 (system bus name :1.79, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale zh_CN.UTF-8) (disconnected from bus)\n",
"type" => "system-syslog",
"priority" => 85,
"logsource" => "root-02",
"@timestamp" => 2018-01-07T08:52:46.000Z,
"@version" => "1",
"host" => "192.168.2.116",
"facility" => 10,
"severity_label" => "Notice",
"timestamp" => "Jan 7 16:52:46",
"facility_label" => "security/authorization"
}
{
"severity" => 6,
"program" => "systemd",
"message" => "Started System Logging Service.\n",
"type" => "system-syslog",
"priority" => 30,
"logsource" => "root-02",
"@timestamp" => 2018-01-07T08:52:46.000Z,
"@version" => "1",
"host" => "192.168.2.116",
"facility" => 3,
"severity_label" => "Informational",
"timestamp" => "Jan 7 16:52:46",
"facility_label" => "system"
}
{
"severity" => 6,
"program" => "systemd",
"message" => "Started Session 28 of user root.\n",
"type" => "system-syslog",
"priority" => 30,
"logsource" => "root-02",
"@timestamp" => 2018-01-07T09:00:01.000Z,
"@version" => "1",
"host" => "192.168.2.116",
"facility" => 3,
"severity_label" => "Informational",
"timestamp" => "Jan 7 17:00:01",
"facility_label" => "system"
}
{
"severity" => 6,
"pid" => "3016",
"program" => "CROND",
"message" => "(root) CMD (/usr/lib64/sa/sa1 1 1)\n",
"type" => "system-syslog",
"priority" => 78,
"logsource" => "root-02",
"@timestamp" => 2018-01-07T09:00:01.000Z,
"@version" => "1",
"host" => "192.168.2.116",
"facility" => 9,
"severity_label" => "Informational",
"timestamp" => "Jan 7 17:00:01",
"facility_label" => "clock"
}
{
"severity" => 6,
"program" => "systemd",
"message" => "Starting Session 28 of user root.\n",
"type" => "system-syslog",
"priority" => 30,
"logsource" => "root-02",
"@timestamp" => 2018-01-07T09:00:01.000Z,
"@version" => "1",
"host" => "192.168.2.116",
"facility" => 3,
"severity_label" => "Informational",
"timestamp" => "Jan 7 17:00:01",
"facility_label" => "system"
}
6. Configure Logstash to send logs to Elasticsearch
Note: run Logstash in the background (as a service)
#Modify the configuration file
[root@root-02 bin]# vim /etc/logstash/conf.d/syslog.conf
input {
syslog {
type => "system-syslog"
port => 10514
}
}
output {
elasticsearch {
hosts => ["192.168.2.115:9200"]
index => "system-syslog-%{+YYYY.MM}" //定义索引
}
}
#Check that the configuration file is OK
[root@root-02 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
#Start Logstash in the background
[root@root-02 bin]# systemctl start logstash
#Check the Logstash process
[root@root-02 bin]# ps aux |grep logstash
logstash 3138 269 38.7 3780868 387308 ? SNsl 17:17 1:18 /bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xms256m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb --path.settings /etc/logstash
root 3175 0.0 0.0 112664 972 pts/0 R+ 17:18 0:00 grep --color=auto logstash
#Two ports should be listening: 9600 & 10514
Note: startup takes a while; once it completes, ports 9600 and 10514 should be listening.
[root@root-02 ~]# !net
netstat -nvlpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1683/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1603/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2065/master
tcp 0 0 0.0.0.0:10050 0.0.0.0:* LISTEN 1418/zabbix_agentd
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 2111/php-fpm: maste
tcp6 0 0 192.168.2.116:9200 :::* LISTEN 2290/java
tcp6 0 0 192.168.2.116:9300 :::* LISTEN 2290/java
tcp6 0 0 :::22 :::* LISTEN 1603/sshd
tcp6 0 0 ::1:25 :::* LISTEN 2065/master
tcp6 0 0 :::10050 :::* LISTEN 1418/zabbix_agentd
#It still had not started after a long wait; fix the log file permissions
[root@root-02 ~]# ls -lh !$
ls -lh /var/log/logstash/logstash-plain.log
-rw-r--r-- 1 root root 3.9K Jan 7 17:15 /var/log/logstash/logstash-plain.log
[root@root-02 ~]# chown logstash /var/log/logstash/logstash-plain.log
[root@root-02 ~]# ls -lh !$
ls -lh /var/log/logstash/logstash-plain.log
-rw-r--r-- 1 logstash root 3.9K Jan 7 17:15 /var/log/logstash/logstash-plain.log
#Restart logstash
[root@root-02 ~]# systemctl restart logstash
#Ports 9600 and 10514 are still not up
[root@root-02 ~]# !net
netstat -nvlpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1683/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1603/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2065/master
tcp 0 0 0.0.0.0:10050 0.0.0.0:* LISTEN 1418/zabbix_agentd
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 2111/php-fpm: maste
tcp6 0 0 192.168.2.116:9200 :::* LISTEN 2290/java
tcp6 0 0 192.168.2.116:9300 :::* LISTEN 2290/java
tcp6 0 0 :::22 :::* LISTEN 1603/sshd
tcp6 0 0 ::1:25 :::* LISTEN 2065/master
tcp6 0 0 :::10050 :::* LISTEN 1418/zabbix_agentd
#Check the Logstash log
[root@root-02 ~]# tailf /var/log/logstash/logstash-plain.log
[2018-01-07T20:24:56,378][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<ArgumentError: Path "/var/lib/logstash/queue" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:443:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:225:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:136:in `block in validate_all'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:135:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:280:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:232:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:71:in `<main>'"]}
Note: /var/lib/logstash/queue must also be owned by the logstash user
#Fix
[root@root-02 ~]# chown logstash -R /var/lib/logstash/queue
[root@root-02 ~]# ls -ld /var/lib/logstash/queue
drwxr-xr-x 2 logstash root 6 Jan 7 16:23 /var/lib/logstash/queue
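More generally, these startup failures come from files under /var/lib/logstash and /var/log/logstash having been created as root during the earlier manual runs. A blunt but effective fix, assuming the package's default paths, is to hand both trees back to the logstash user:
#Give the logstash user back its data and log directories
[root@root-02 ~]# chown -R logstash /var/lib/logstash /var/log/logstash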
#Restart logstash
[root@root-02 ~]# systemctl restart logstash
#Now ports 9600 and 10514 are listening
[root@root-02 ~]# netstat -nvlpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1683/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1603/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2065/master
tcp 0 0 0.0.0.0:10050 0.0.0.0:* LISTEN 1418/zabbix_agentd
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 2111/php-fpm: maste
tcp6 0 0 192.168.2.116:9200 :::* LISTEN 2290/java
tcp6 0 0 :::10514 :::* LISTEN 23133/java
tcp6 0 0 192.168.2.116:9300 :::* LISTEN 2290/java
tcp6 0 0 :::22 :::* LISTEN 1603/sshd
tcp6 0 0 ::1:25 :::* LISTEN 2065/master
tcp6 0 0 127.0.0.1:9600 :::* LISTEN 23133/java
tcp6 0 0 :::10050 :::* LISTEN 1418/zabbix_agentd
Tip: if the ports never come up, check with tail /var/log/logstash/logstash-plain.log; it is usually a permissions problem.
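Once Logstash is up and rsyslog is forwarding, the whole chain can be verified by asking Elasticsearch whether the monthly index defined above (system-syslog-YYYY.MM) has been created; the exact index name depends on the current month:
#A system-syslog-* index should appear once events start flowing
[root@root-02 ~]# curl '192.168.2.115:9200/_cat/indices?v'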