Deploying ELK on Linux without Docker: es, kibana, filebeat, logstash

I. Download the packages

wget https://artifacts.elastic.co/downloads/kibana/kibana-7.17.3-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.3-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.17.3-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.3-linux-x86_64.tar.gz
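After downloading, extract each archive. A minimal sketch, assuming the archives sit in the current directory and everything is laid out under /home/elk (the path used by the es config below); adjust to your own layout:

# extract the four archives; the /home/elk layout is an assumption
mkdir -p /home/elk/es
tar -xzf elasticsearch-7.17.3-linux-x86_64.tar.gz -C /home/elk/es
tar -xzf kibana-7.17.3-linux-x86_64.tar.gz        -C /home/elk
tar -xzf logstash-7.17.3-linux-x86_64.tar.gz      -C /home/elk
tar -xzf filebeat-7.17.3-linux-x86_64.tar.gz      -C /home/elk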

II. Deploy es

1. Extract the archive and edit config/elasticsearch.yml

# If you run multiple instances on this host, 2 means up to 2 nodes may share this path
node.max_local_storage_nodes: 2
# Keep the data directory isolated per instance
path.data: /home/elk/es/elasticsearch-7.17.3/data
node.name: node-1
cluster.initial_master_nodes: ["node-1"]

path.logs: /home/elk/es/elasticsearch-7.17.3/logs
network.host: 0.0.0.0
http.host: 0.0.0.0
http.port: 9201
discovery.seed_hosts: ["127.0.0.1"]

2. Start elasticsearch, then open 127.0.0.1:9201 in a browser; you should see that es is running.
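The start command is not shown above, so here is a minimal sketch; note that elasticsearch refuses to run as root, so a dedicated non-root user (here called elk, an assumed name) is used:

# run as the non-root user that owns the install directory
su - elk
cd /home/elk/es/elasticsearch-7.17.3
./bin/elasticsearch -d -p es.pid    # -d: run as a daemon, -p: write a pid file

# same check as the browser, from the shell
curl http://127.0.0.1:9201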

III. kibana

1. Configure kibana.yml

server.port: 5601 
server.host: "0.0.0.0" 
# i18n.locale: "zh-CN" switches the UI to Chinese
i18n.locale: "zh-CN"
elasticsearch.hosts: ["http://127.0.0.1:9201"] 
kibana.index: ".kibana"

2. Start command: ./bin/kibana
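To keep it running after you log out, a nohup sketch (kibana also refuses to run as root, so use the same non-root user; the directory name is assumed from the archive layout):

cd /home/elk/kibana-7.17.3-linux-x86_64
nohup ./bin/kibana > kibana.out 2>&1 &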

3. Open 127.0.0.1:5601 in a browser; kibana should be up and running.

Next we need to collect the server logs and write them into es. There are three options:

1. Collect with logstash directly. logstash depends on a JDK, and different logstash versions require different JDK versions; if the JDK logstash needs differs from the one your Java services run on, you end up switching back and forth between JDKs, which is a hassle.

2. Collect logs with filebeat and ship them straight to es. However, because of version and configuration quirks, it is easy for the configured index to be ignored and for events to land in the default index instead, e.g. filebeat-7.17.3-2024.11.02-000001.

3. Collect logs with filebeat, ship them to logstash first, and have logstash forward them to es. This is the approach demonstrated below.

IV. filebeat

1. Install filebeat on the server that hosts the Java services and edit filebeat.yml as follows:

filebeat.inputs:
- type: log
  id: uat1
  enabled: true
  encoding: utf-8
  paths:
    - /var/XXX/spring.log/*.log
  multiline.pattern: '^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}'
  multiline.negate: true
  multiline.match: after
  exclude_files: ['.gz$']    # for the "log" input type the option is exclude_files (prospector.scanner.* is for filestream)
  fields:
    tag: uat1
    type: erp-logdata-pipeline 
    source: common
    service_name: uat1

- type: log
  id: uat2
  enabled: true
  encoding: utf-8
  paths:
    - /var/XXX/*.log
  multiline.pattern: '^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}'
  multiline.negate: true
  multiline.match: after
  exclude_files: ['.gz$']
  fields:
    tag: uat2
    type: uat2
    source: common
    service_name: uat2

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1

# =================================== Kibana ===================================
#setup.kibana:
#  host: "https://192.168.6.220:30091"
#  # You can create an account in the kibana console
#  username: "kibanaerp"
#  password: "replace your password"

# ================================== Outputs ===================================

output.logstash:
  hosts: ["XXX:5044"]

logging.level: info

processors:
  - drop_fields:
     fields: ["log","host","input","agent","ecs"]
     ignore_missing: false

2. Start filebeat
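The start command is not spelled out above; a sketch, using filebeat's built-in test subcommands to validate the config and the connection to logstash first (the install path is an assumption):

cd /home/elk/filebeat-7.17.3-linux-x86_64

# sanity checks before starting
./filebeat test config -c filebeat.yml    # is the yml valid?
./filebeat test output -c filebeat.yml    # can we reach logstash on XXX:5044?

# run in the background; -e also logs to stderr, captured in filebeat.out
nohup ./filebeat -e -c filebeat.yml > filebeat.out 2>&1 &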

V. Finally, configure logstash

1. Go into the config directory and copy logstash-sample.conf into the bin directory, so it is handy when starting later.
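For example (the logstash directory name is assumed from the archive layout):

cd /home/elk/logstash-7.17.3
cp config/logstash-sample.conf bin/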

2. First, set up a throwaway config just to see what the incoming data looks like:

input {
  beats {
    # port that filebeat sends to
    port => 5044
  }
}
output {
    # print the events
    stdout {
        codec => rubydebug
    }
}
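Start logstash with this file and watch the events arrive on stdout; --config.test_and_exit is a quick syntax check (a sketch, assuming the conf file sits in bin/ as copied above):

cd /home/elk/logstash-7.17.3/bin

# optional: validate the pipeline file first
./logstash -f logstash-sample.conf --config.test_and_exit

# run in the foreground and watch the rubydebug output
./logstash -f logstash-sample.conf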

The data looks like this:

{
      "@version" => "1",
        "fields" => {
              "source" => "common",
              "type" => "erp-logdata-pipeline",
              "tag" => "uat1",
              "service_name" => "uat1"
    },
    "@timestamp" => 2024-11-04T00:50:11.135Z,
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
       "message" => "2024-11-04 08:50:10.353  INFO 21771 --- [container-1346] com.listener.RedisListener   : =====redis取出来为空"
}

We want to set the es index dynamically, keyed off service_name inside fields. The logstash config therefore looks like this:

input {
    beats {
        port => 5044
    }
}
filter {
    # filebeat put service_name under "fields", so reference it as [fields][service_name]
    if [fields][service_name] == "uat1" {
        mutate {
            add_field => { "index_name" => "uat1-%{+yyyy.MM.dd}" }
        }
    }
    else if [fields][service_name] == "uat2" {
        mutate {
            add_field => { "index_name" => "uat2-%{+yyyy.MM.dd}" }
        }
    }
}
output {
        if [fields][service_name] == "uat1" {
            elasticsearch {
               hosts => ["127.0.0.1:9201"]
               index => "uat1-service-%{+yyyy.MM.dd}"
            }
        }
        else if [fields][service_name] == "uat2" {
            elasticsearch {
               hosts => ["127.0.0.1:9201"]
               index => "uat2-service-%{+yyyy.MM.dd}"
            }
        }
    stdout {
        codec => rubydebug
    }
}

Tip: it is best not to use %{} placeholders in the es index setting, since some versions do not support them; that is why the output above hard-codes the index name inside each conditional instead of interpolating %{index_name}.
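After restarting logstash with the config above and letting the applications write some logs, you can confirm the indices were created before adding an index pattern in kibana:

# uat1-service-* / uat2-service-* should show up in the list
curl 'http://127.0.0.1:9201/_cat/indices?v'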

With that, the ELK deployment is complete.
