Purpose:
Previously, NGINX logs were analyzed by exporting them to a directory and then parsing and loading them into an Oracle database with sqlldr plus shell scripts. This was acceptable at low volume, but once the data volume grew, both loading and querying became problematic, so ELK was introduced.
How It Works
1. Logstash acts as the log collector: an agent periodically pushes NGINX log lines into Redis, while an indexer reads them back out of Redis, parses them, and passes them on to Elasticsearch.
2. Elasticsearch receives the logs parsed by Logstash and handles storage and analysis.
3. Kibana queries Elasticsearch to present the logs as reports and dashboards.
Installation Preparation
1. Create a user:
useradd log_user;
passwd log_user;
2. Set up the server environment:
192.168.1.254 runs NGINX + logstash as the log-collection agent
192.168.1.253 runs Redis + logstash_indexer + Elasticsearch + Kibana
Redis Installation
1. Download Redis: wget http://download.redis.io/releases/redis-3.0.7.tar.gz
2. Extract: tar xzf redis-3.0.7.tar.gz
3. cd redis-3.0.7
4. Build with make
5. Adjust the configuration as needed (see the sketch below)
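A minimal redis.conf sketch for this setup. The only hard requirement is that the port matches what the Logstash configs below expect (21031); the bind address and daemonize setting are assumptions for this two-host layout:

    port 21031            # non-default port; must match the redis plugin settings in both Logstash configs
    bind 192.168.1.253    # listen on the indexer host's address (assumption)
    daemonize yes         # run redis-server in the background

Then start it with: src/redis-server redis.conf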
Elasticsearch Installation
1. Download the package from https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.tar.gz
2. tar -xzvf elasticsearch-1.7.1.tar.gz
3. Go into the bin directory and start it as a daemon with ./elasticsearch -d; the default port is 9200
4. Test whether the installation succeeded: curl -X GET http://localhost:9200
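If the node is up, the call returns a JSON summary roughly like the following (abridged; the node name is randomized per start and build values will differ):

    {
      "status" : 200,
      "name" : "some-node-name",
      "cluster_name" : "elasticsearch",
      "version" : { "number" : "1.7.1", ... },
      "tagline" : "You Know, for Search"
    }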
Problems encountered:
The operating system ships with JDK 1.6 by default, but Elasticsearch 1.2 and later only support JDK 1.7 and above, so JDK 1.7 has to be installed:
(1) Download the jdk-7u79-linux-x64.gz package, extract it, and rename the directory to java7
(2) Add the environment variables to the .bashrc file in the current user's home directory:

# JAVA_HOME
JAVA_HOME=/home/log_user/java7
export JAVA_HOME
# CLASS_PATH
CLASS_PATH=$JAVA_HOME/lib/*.jar:$JAVA_HOME/jre/lib/*.jar
export CLASS_PATH
export LANG=zh_CN.GBK
export LANGUAGE=zh_CN.GB18030:zh_CN.GB2312:zh_CN
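Reload the file and confirm the new JDK is picked up (exact build strings vary; 1.7.0_79 matches the package above):

    source ~/.bashrc
    $JAVA_HOME/bin/java -version    # should report: java version "1.7.0_79"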
Logstash Installation
1. Download the Logstash package
2. Extract it
3. Check that it works with bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'
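With that one-liner running, anything typed on stdin should come back as a structured event, roughly like this (timestamp and host will reflect your machine; "log-agent" is a placeholder):

    hello world
    {
           "message" => "hello world",
          "@version" => "1",
        "@timestamp" => "2016-03-18T02:15:30.000Z",
              "host" => "log-agent"
    }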
Logstash Configuration
192.168.1.254:
1. Create an etc directory for configuration
2. Create the configuration file logstash-agent.conf with the following content:

input {
  file {
    type => "nginx access log"
    path => ["/home/info/nginx/logs/shq_server.log"]
  }
}
output {
  redis {
    host => "192.168.1.253"   # redis server
    port => "21031"
    data_type => "list"
    key => "logstash:redis"
  }
}

3. Start it: ./logstash -f ../etc/logstash-agent.conf
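To confirm the agent is actually shipping events, check the length of the Redis list it writes to; the count should climb as NGINX logs requests (and drop again once the indexer starts consuming):

    redis-cli -h 192.168.1.253 -p 21031 llen logstash:redis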
192.168.1.253:
1. Create an etc directory for configuration
2. Create the configuration file logstash_indexer.conf with the following content:

input {
  redis {
    host => "192.168.1.253"
    port => "21031"
    data_type => "list"
    key => "logstash:redis"
    type => "redis-input"
  }
}
filter {
  # split the pipe-delimited message into the 15 fields defined by the NGINX log_format below
  ruby {
    init => "@kname = ['http_x_forwarded_for','time_local','request','status','body_bytes_sent','request_body','content_length','http_referer','http_user_agent','http_cookie','remote_addr','hostname','upstream_addr','upstream_response_time','request_time']"
    code => "event.append(Hash[@kname.zip(event['message'].split(' | '))])"
  }
  if [request] {
    # "GET /path?a=1 HTTP/1.1" -> method, uri, verb
    ruby {
      init => "@kname = ['method','uri','verb']"
      code => "event.append(Hash[@kname.zip(event['request'].split(' '))])"
    }
    if [uri] {
      # "/path?a=1" -> url_path, url_args
      ruby {
        init => "@kname = ['url_path','url_args']"
        code => "event.append(Hash[@kname.zip(event['uri'].split('?'))])"
      }
      kv {
        prefix => "url_"
        source => "url_args"
        field_split => "& "
        remove_field => [ "url_args", "uri", "request" ]
      }
    }
  }
  mutate {
    convert => [
      "body_bytes_sent", "integer",
      "content_length", "integer",
      "upstream_response_time", "float",
      "request_time", "float"
    ]
  }
}
output {
  elasticsearch {
    embedded => false
    protocol => "http"
    host => "localhost"
    port => "9200"
  }
}
3. Start it: ./logstash -f ../etc/logstash_indexer.conf
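Once the indexer has run for a moment, events should be landing in the daily logstash-YYYY.MM.DD indices that the elasticsearch output creates by default; a quick check:

    curl 'http://localhost:9200/logstash-*/_search?size=1&pretty'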
Kibana Installation
1. Download the Kibana package
2. Extract it; the default port is 5601
3. Open http://192.168.1.253:5601/ in a browser and complete the configuration (a recent browser is required, otherwise it gets stuck on the loading page forever; a real trap)
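If the page will not load, it helps to first rule out the server side; this should print 200 when Kibana is listening (a simple check, assuming no firewall in between):

    curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.253:5601/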
Installation Complete
The basic environment is now in place; what remains is to process the collected data as each need arises, and to think through how this plays out across multiple servers.
Other
NGINX log format:
log_format main "$http_x_forwarded_for | $time_local | $request | $status | $body_bytes_sent | "
                "$request_body | $content_length | $http_referer | $http_user_agent | "
                "$http_cookie | $remote_addr | $hostname | $upstream_addr | $upstream_response_time | $request_time";