ELK Deployment Environment Preparation
1. Machine preparation
Two servers:
ubuntu-elk-node1: Ubuntu 16.04.3 / 192.168.15.68
ubuntu-nginx-node2: Ubuntu 16.04.3 / 192.168.15.244
2. ELK package preparation
-rwxrwxrwx 1 root root  27542289 Dec 26 12:48 elasticsearch-2.3.3.tar.gz*
-rwxrwxrwx 1 root root  33045518 Dec 26 12:52 kibana-4.5.1-linux-x64.tar.gz*
-rwxrwxrwx 1 root root  78887475 Dec 26 13:18 logstash-2.3.3.tar.gz*
-rw-r--r-- 1 root root 189784266 Dec 25 16:47 jdk-8u152-linux-x64.tar.gz
3. Install Java (required on both servers)
Extract the JDK to /usr/java/jdk1.8.0_152
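For example (a minimal sketch, assuming the JDK tarball sits in the current directory):
mkdir -p /usr/java
tar xvf jdk-8u152-linux-x64.tar.gz -C /usr/java/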
vim /etc/profile and append the following at the end
#JDK 1.8
export JAVA_HOME=/usr/java/jdk1.8.0_152
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
#export PATH=${JAVA_HOME}/bin:$PATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
Apply the configuration immediately
source /etc/profile
Check the Java version
java -version
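The output should look roughly like this (the exact build string may differ):
java version "1.8.0_152"
Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)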
4. Installation on ubuntu-elk-node1: Ubuntu 16.04.3 / 192.168.15.68
4.1. Elasticsearch installation
Extract Elasticsearch
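If the /usr/local/elk directory does not exist yet, create it before extracting:
mkdir -p /usr/local/elk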
tar xvf elasticsearch-2.3.3.tar.gz -C /usr/local/elk/
Create the elasticsearch user
adduser elasticsearch
Change ownership of the Elasticsearch directory
chown -R elasticsearch.elasticsearch /usr/local/elk/elasticsearch-2.3.3/
Elasticsearch configuration file elasticsearch.yml
cluster.name: chuck-cluster             # determines whether nodes belong to the same cluster
node.name: ubuntu-elk-node1             # the node's hostname
path.data: /data/es-data                # data storage path
path.logs: /var/log/elasticsearch/      # log path
bootstrap.mlockall: true                # lock memory so it is not swapped out
network.host: 0.0.0.0                   # bind address (0.0.0.0 allows access from any IP)
http.port: 9200                         # port
Create the data and log directories and grant ownership to the elasticsearch user
mkdir -p /data/es-data
mkdir -p /var/log/elasticsearch/
chown -R elasticsearch.elasticsearch /data/es-data
chown -R elasticsearch.elasticsearch /var/log/elasticsearch/
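Because bootstrap.mlockall is enabled, the elasticsearch user also needs permission to lock memory, otherwise Elasticsearch logs a warning at startup. A minimal sketch, assuming limits are managed through /etc/security/limits.conf:
# /etc/security/limits.conf
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited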
Start Elasticsearch (the -d flag runs it in the background)
su elasticsearch -l -c "/usr/local/elk/elasticsearch-2.3.3/bin/elasticsearch -d"
netstat -lntup | grep 9200
tcp6       0      0 :::9200        :::*         LISTEN      1759/java
Accessing port 9200 displays the node information
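A quick check from the command line (the exact JSON fields depend on the release):
curl http://192.168.15.68:9200
# returns a JSON document with the node name, cluster_name and the Elasticsearch version (2.3.3)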
Install the head plugin to view index and shard status
/usr/local/elk/elasticsearch-2.3.3/bin/plugin install mobz/elasticsearch-head
Open http://192.168.15.68:9200/_plugin/head/ in a browser
4.2. Kibana installation
Extract Kibana
tar xvf kibana-4.5.1-linux-x64.tar.gz -C /usr/local/elk/
Edit kibana.yml (under the Kibana config/ directory)
server.port: 5601                                 # Kibana port
server.host: "0.0.0.0"                            # host to serve on
elasticsearch.url: "http://192.168.15.68:9200"    # Elasticsearch address
kibana.index: ".kibana"                           # the .kibana index created in Elasticsearch
Start Kibana (append & to run it in the background)
/usr/local/elk/kibana-4.5.1-linux-x64/bin/kibana &
netstat -lntup | grep 5601
tcp        0      0 0.0.0.0:5601       0.0.0.0:*       LISTEN      1815/node
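As a quick check that the web UI answers (a minimal sketch):
curl -I http://192.168.15.68:5601
# a 200 or 3xx response means Kibana is serving requests; open the same URL in a browser to use it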
5. Installation on ubuntu-nginx-node2: Ubuntu 16.04.3 / 192.168.15.244
5.1. Logstash installation
Extract Logstash
tar xvf logstash-2.3.3.tar.gz -C /usr/local/elk/
Modify the Nginx configuration to log in JSON format
log_format json '{ "@timestamp": "$time_iso8601", '                   #local time in ISO 8601 format
    '"@fields": { '
    '"remote_addr": "$remote_addr", '                                  #client IP address
    '"remote_user": "$remote_user", '                                  #client user name
    '"time_local": "$time_local", '                                    #local time in the common log format
    '"request": "$request", '                                          #request URL and HTTP protocol
    '"status": "$status", '                                            #request status
    '"body_bytes_sent": "$body_bytes_sent", '                          #bytes sent to the client, excluding response headers; compatible with the "%B" parameter of Apache's mod_log_config module
    '"http_referer": "$http_referer", '                                #page the request was referred from
    '"http_user_agent": "$http_user_agent", '                          #client browser information
    '"http_x_forwarded_for": "$http_x_forwarded_for", '                #client IP address (behind a proxy)
    '"upstream_cache_status": "$upstream_cache_status", '
    '"request_time": "$request_time", '                                #request processing time in seconds with millisecond precision, from the first byte read from the client until the last byte is sent and the log entry is written
    '"upstream_response_time": "$upstream_response_time" } }';

access_log /usr/local/n/logs/www.test.com-access.log json;
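After editing, the Nginx configuration can be tested and reloaded; a minimal sketch, assuming the nginx binary lives under the /usr/local/n prefix used above:
/usr/local/n/sbin/nginx -t          # check the configuration syntax
/usr/local/n/sbin/nginx -s reload   # reload so the json log_format takes effect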
Logstash reads the Nginx log and pushes it to Elasticsearch on 192.168.15.68
Create a directory for the configuration files
mkdir /usr/local/elk/logstash-2.3.3/conf
Create the configuration file nginx-www.test.com.conf
input {
    file {
        path => "/usr/local/n/logs/www.test.com-access.log"
        codec => json
        type => "nginx-www.test.com"
        start_position => "beginning"
    }
}
output {
    if [type] == "nginx-www.test.com" {
        elasticsearch {
            hosts => ["192.168.15.68:9200"]
            index => "nginx-www.test.com-%{+YYYY.MM.dd}"
        }
    }
}
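The configuration can be validated before starting (Logstash 2.x ships a --configtest flag):
/usr/local/elk/logstash-2.3.3/bin/logstash -f /usr/local/elk/logstash-2.3.3/conf/nginx-www.test.com.conf --configtest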
Start Logstash
/usr/local/elk/logstash-2.3.3/bin/logstash -f /usr/local/elk/logstash-2.3.3/conf/nginx-www.test.com.conf
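Once requests hit Nginx, a daily index should appear in Elasticsearch; it can be listed with the _cat API or viewed in the head plugin:
curl 'http://192.168.15.68:9200/_cat/indices?v'
# look for an index named nginx-www.test.com-YYYY.MM.dd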