Logstash+Elasticsearch+Kibana
- Logstash: monitors, filters, and collects logs
- Elasticsearch: stores logs and provides search
- Kibana: provides a web UI with query, statistics, and chart visualization
- Filebeat: a lightweight log shipper

Many companies build distributed logging systems on this architecture, including Sina Weibo, FreeWheel, and others.
(Architecture diagram)
Filebeat handles log collection and forwarding. Written in Go, it is lighter-weight and easier to deploy than Logstash, with a smaller system resource footprint.
Installation and deployment (no Java dependency)
1:Download https://www.elastic.co/downloads/past-releases/filebeat-1-3-1
2:tar -zxvf filebeat-1.3.1-x86_64.tar.gz
3:Edit the filebeat.yml configuration file
4:Start: ./filebeat -c filebeat.yml    (help: ./filebeat -h)
5:Run in the background: nohup ./filebeat -c filebeat.yml >/dev/null 2>&1 &
6:(Optional) To make later debugging and troubleshooting easier, enable logging by editing the Logging section of filebeat.yml
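The Logging section mentioned in step 6 can look roughly like this (a sketch based on the stock filebeat 1.x filebeat.yml; verify the option names against the comments in your own copy):

```yaml
# Write filebeat's own logs to files instead of syslog
# (assumed values for illustration; adjust path and rotation to taste).
logging:
  to_files: true
  files:
    path: /var/log/filebeat
    name: filebeat.log
    rotateeverybytes: 10485760  # rotate after 10 MB
    keepfiles: 7
  level: info
```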
filebeat.yml configuration explained
################### Filebeat Configuration Example #########################
############################# Filebeat ######################################
filebeat:
# List of prospectors to fetch data.
prospectors:
# Each - is a prospector. Below are the prospector specific configurations
-
# Paths that should be crawled and fetched. Glob based paths.
# To fetch all ".log" files from a specific level of subdirectories
# /var/log/*/*.log can be used.
# For each file found under this path, a harvester is started.
# Make sure a file is not defined twice as this can lead to unexpected behaviour.
# Specify the logs to monitor; either concrete files or directories can be given.
paths:
- /var/log/*.log
#- c:\programdata\elasticsearch\logs\*
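The glob behavior described above can be illustrated with Python's glob module (only an analogy; Filebeat does its own pattern matching):

```python
import glob
import os
import tempfile

# Build a small throwaway directory tree to demonstrate the two patterns.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "app1"))
open(os.path.join(root, "sys.log"), "w").close()
open(os.path.join(root, "app1", "web.log"), "w").close()

top = glob.glob(os.path.join(root, "*.log"))          # like /var/log/*.log
nested = glob.glob(os.path.join(root, "*", "*.log"))  # like /var/log/*/*.log
print([os.path.basename(p) for p in top])     # ['sys.log']
print([os.path.basename(p) for p in nested])  # ['web.log']
```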
# Configure the file encoding for reading files with international characters
# following the W3C recommendation for HTML5 (http://www.w3.org/TR/encoding).
# Some sample encodings:
# plain, utf-8, utf-16be-bom, utf-16be, utf-16le, big5, gb18030, gbk,
# hz-gb-2312, euc-kr, euc-jp, iso-2022-jp, shift-jis, ...
# Encoding of the monitored files; both plain and utf-8 handle Chinese logs correctly.
#encoding: plain
# Type of the files. Based on this the way the file is read is decided.
# The different types cannot be mixed in one prospector
#
# Possible options are:
# * log: Reads every line of the log file (default)
# * stdin: Reads the standard in
# Input type of the file: log (default) or stdin.
input_type: log
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list. By default, no lines are dropped.
# Drop input lines that match any regular expression in the list.
# exclude_lines: ["^DBG"]
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list. The include_lines is called before
# exclude_lines. By default, all the lines are exported.
# Keep only input lines that match a regex in the list (default: all lines). include_lines runs before exclude_lines.
# include_lines: ["^ERR", "^WARN"]
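The interaction of include_lines and exclude_lines can be simulated in a few lines of Python (filter_lines is a hypothetical helper that mirrors the documented order: include first, then exclude):

```python
import re

def filter_lines(lines, include=None, exclude=None):
    # include_lines is applied first, then exclude_lines,
    # matching the order described in the config comments.
    if include:
        lines = [l for l in lines if any(re.search(p, l) for p in include)]
    if exclude:
        lines = [l for l in lines if not any(re.search(p, l) for p in exclude)]
    return lines

logs = ["ERR disk full", "WARN low memory", "DBG heartbeat", "INFO started"]
print(filter_lines(logs, include=["^ERR", "^WARN"], exclude=["^WARN"]))
# Only "ERR disk full" survives: the WARN line passes include but is then excluded.
```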
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
# Ignore files whose names match a regex in the list.
# exclude_files: [".gz$"]
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
# Attach extra information to every output event, e.g. "level: debug", to make
# later grouping and statistics easier. By default each added field is nested
# under a "fields" key in the output, e.g. fields.level, so the document stored
# in ES gains an extra field of the form "fields": {"level": "debug"}.
#fields:
# level: debug
# review: 1
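With the fields settings above, each event shipped to Elasticsearch carries the extra values under a nested fields key. The event below is an illustrative sketch, not an actual Filebeat dump:

```python
import json

# Illustrative event shape (assumed): the configured key/value pairs
# appear nested under "fields" by default.
event = {
    "message": "2016-09-01 12:00:00 DEBUG starting worker",
    "fields": {"level": "debug", "review": 1},
}
print(json.dumps(event["fields"], sort_keys=True))
# In Elasticsearch the value is then queryable as fields.level.
```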
# Set to true to store the additional fields as top level fields instead
# of under the "field