0 Preliminary Work
0.1 Environment
Ports
elasticsearch: 9200, 9300
kibana: 5601
logstash: 5044
filebeat: none (Filebeat does not listen on a port by default; it ships data out to Logstash)
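Once the services are started in the later sections, a quick port check confirms the listeners; a minimal sketch, assuming a Linux host with ss and curl available:
# Confirm Elasticsearch/Kibana/Logstash are listening on the expected ports
ss -lntp | grep -E ':9200|:9300|:5601|:5044'
# Elasticsearch should answer on 9200 with cluster and version JSON
curl http://localhost:9200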
JDK requirements
elasticsearch: JDK 11 or later
logstash: JDK 11 or later
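To verify the system JDK, a minimal check (note that the Elasticsearch 7.17 tarball ships with a bundled JDK, so a system-wide JDK is not strictly required for it):
# Expect version 11 or later per the requirement above
java -version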
Roles
ELKF is short for Elasticsearch, Logstash, Kibana, and Filebeat.
elasticsearch: a storage, search, and analytics engine, characterized by high scalability, high reliability, and ease of management.
kibana: a data analysis and visualization platform, usually backed by elasticsearch.
logstash: a data collection engine that can filter, parse, enrich, and normalize data, then write it to a destination of your choice, including but not limited to files and elasticsearch.
filebeat: a lightweight open-source log file shipper, responsible for collecting the logs of your services.
In short: Filebeat collects the logs, Logstash parses and formats them, Elasticsearch stores them, and Kibana analyzes them.
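To make that data flow concrete, a minimal Logstash pipeline definition might look like the sketch below (a hypothetical file such as /home/elkf/logstash-7.17.0/config/beats-es.conf; it assumes Filebeat ships to the default Beats port 5044 and Elasticsearch is reachable on localhost:9200):
input {
  beats {
    port => 5044                         # Filebeat connects here
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]   # events are stored in Elasticsearch; Kibana reads from there
  }
}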
0.2 Uploading the Files
Upload the relevant archives, one after another, to the /home/elkf directory.
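A minimal sketch of the upload from a workstation, assuming scp is available; the user and host names are placeholders, adjust them to your environment:
# Copy the four 7.17.0 archives to the target host
scp elasticsearch-7.17.0-linux-x86_64.tar.gz kibana-7.17.0-linux-x86_64.tar.gz \
    logstash-7.17.0-linux-x86_64.tar.gz filebeat-7.17.0-linux-x86_64.tar.gz \
    user@your-server:/home/elkf/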
0.3 Extracting the Archives
cd /home/elkf
tar -xzvf elasticsearch-7.17.0-linux-x86_64.tar.gz
tar -xzvf kibana-7.17.0-linux-x86_64.tar.gz
tar -xzvf logstash-7.17.0-linux-x86_64.tar.gz
tar -xzvf filebeat-7.17.0-linux-x86_64.tar.gz
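Optionally, pre-create the data and log directories that the elasticsearch.yml in the next section points at; Elasticsearch can usually create them itself, as long as the process user can write to the installation directory:
mkdir -p /home/elkf/elasticsearch-7.17.0/datas /home/elkf/elasticsearch-7.17.0/logs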
1 elasticsearch
1.1 Configure elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster.name: elasticsearch_prod
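# Custom setting: disable the disk-usage watermarks that would otherwise block
# shard allocation on a nearly full disk (convenient for a demo box, risky in production)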
cluster.routing.allocation.disk.threshold_enabled: false
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
node.name: node-001-data
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
path.data: /home/elkf/elasticsearch-7.17.0/datas
#
# Path to log files:
#
#path.logs: /path/to/logs
path.logs: /home/elkf/elasticsearch-7.17.0/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
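# Custom settings: allow cross-origin requests so browser-based tools (e.g. elasticsearch-head
# or a custom dashboard) can call the REST API directly; tighten allow-origin in production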
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and clu