Logstash

Installation

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

vi /etc/yum.repos.d/logstash.repo

[logstash-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

sudo yum install logstash

Installation layout

service unit:  /etc/systemd/system/logstash.service

home:      /usr/share/logstash

bin:       /usr/share/logstash/bin

config:    /etc/logstash

logs:      /var/log/logstash/

plugins:   /usr/share/logstash/plugins

data:      /var/lib/logstash (contains .lock, etc.)

Configuration

Pipeline configuration files, which define the Logstash processing pipeline:

/etc/logstash/conf.d

Settings files, which specify options that control Logstash startup and execution:

/etc/logstash/logstash.yml    The main Logstash settings file.

/etc/logstash/pipelines.yml   Contains the framework and instructions for running multiple pipelines in a single Logstash instance.

/etc/logstash/jvm.options     Contains JVM configuration flags. Use this file to set initial and maximum values for total heap space.

/etc/logstash/startup.options
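As an illustration of the main settings file, a minimal /etc/logstash/logstash.yml might look like the following. This is only a sketch; the node name and worker count are assumptions to adjust per host:

```yaml
# Sketch of a minimal logstash.yml; the specific values are illustrative assumptions.
node.name: logstash-node-1        # hypothetical node name
path.data: /var/lib/logstash      # matches the data directory listed above
path.logs: /var/log/logstash      # matches the log directory listed above
pipeline.workers: 2               # defaults to the number of CPU cores
config.reload.automatic: false    # set true to hot-reload pipeline files
```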

Testing the installation

The first pipeline

cd /usr/share/logstash/

bin/logstash -e 'input { stdin {} } output { stdout {} }'

This opens an interactive session; after a short wait you will see: The stdin plugin is now waiting for input:

Type hello world and press Enter; the following is printed:

{
      "@version" => "1",
       "message" => "hello world",
    "@timestamp" => 2022-04-26T09:18:26.485741Z,
         "event" => {
        "original" => "hello world"
    },
          "host" => {
        "hostname" => "10-52-6-111"
    }
}

Installation successful.
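The pretty-printed event above comes from the rubydebug codec, which the stdout output uses by default; requesting it explicitly looks like this (equivalent to the command above):

```conf
output {
  stdout { codec => rubydebug }   # same pretty-printed event output as the default
}
```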

 ----------------------------------------------------------------------------------------------------

Pipeline configuration

pwd
/etc/logstash/conf.d

Final working configuration:

Relevant Logstash, Kafka, and ES versions:

logstash  8.1.3 (Using bundled JDK: /usr/share/logstash/jdk)

es        7.16.3

kafka     2.7.1

cat /etc/logstash/conf.d/kafka_to_es.conf
input {
  kafka {
    bootstrap_servers => "106:9093,10.1:9093,10:9093"
    topics => "vslogs"
    group_id => "vslogs_group_id_1"
    client_id => "vslogs_client_id_1"
    auto_offset_reset => "latest"
    consumer_threads => 3
    decorate_events => true
    type => "vslogs"
    codec => "json"
    sasl_mechanism => "SCRAM-SHA-256"
    security_protocol => "SASL_PLAINTEXT"
    sasl_jaas_config => "org.apache.kafka.common.security.scram.ScramLoginModule required username='' password='';"
  }

  kafka {
    bootstrap_servers => "6:9093,2.31:9093,1.4.0.112:9093"
    topics => "vsulblog"
    group_id => "vsulblog_group_id_1"
    client_id => "vsulblog_client_id_1"
    auto_offset_reset => "latest"
    consumer_threads => 3
    decorate_events => true
    type => "vsulblog"
    codec => "json"
    sasl_mechanism => "SCRAM-SHA-256"
    security_protocol => "SASL_PLAINTEXT"
    sasl_jaas_config => "org.apache.kafka.common.security.scram.ScramLoginModule required username='' password='';"
  }
}

filter {}

output {
  if [type] == "vslogs" {
    elasticsearch {
      hosts => [ ":9200", ":9200", ":9200" ]
      index => "vs-vslogs"
      user => ""
      password => ""
    }
  }
  if [type] == "vsulblog" {
    elasticsearch {
      hosts => [ ":9200", ":9200", "1.4.1.9:9200" ]
      index => "vs-vsulblog"
      user => ""
      password => ""
    }
  }
}
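The stock /etc/logstash/pipelines.yml already picks up everything under conf.d; to run this file as its own dedicated pipeline instead, an entry might look like the following sketch (the pipeline id is an illustrative assumption):

```yaml
# Sketch of a /etc/logstash/pipelines.yml entry; the id below is illustrative.
- pipeline.id: kafka_to_es
  path.config: "/etc/logstash/conf.d/kafka_to_es.conf"
```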

Notes on the configuration file:

1. When the input block contains multiple kafka sources, each one must set a client_id (e.g. client_id => "client1", client_id => "client2"), and the values must differ.

In cases when multiple inputs are being used in a single pipeline, reading from different topics, it’s essential to set a different group_id => ... for each input. Setting a unique client_id => ... is also recommended.

2. Older Logstash versions use different parameter names:

topics => "accesslogs"                                              (older versions use topic_id instead)
bootstrap_servers => "JANSON01:9092,JANSON02:9092,JANSON03:9092"    (older versions use zk_connect => "JANSON01:2181,xx" instead)
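Putting note 1 into a minimal sketch, two kafka inputs in one pipeline each get their own group_id and client_id (the topic and ID names here are illustrative assumptions):

```conf
input {
  kafka {
    topics    => "topic_a"     # hypothetical topic
    group_id  => "group_a"     # must differ per input
    client_id => "client_a"    # recommended to differ per input
  }
  kafka {
    topics    => "topic_b"
    group_id  => "group_b"
    client_id => "client_b"
  }
}
```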



Configuration for reading from a file and writing to Kafka:

input {
  file {
    codec => plain {
      charset => "UTF-8"
    }
    # a glob such as "/tmp/log/*" would match every file under that path
    path => "/root/logserver/gamelog.txt"
    discover_interval => 5
    start_position => "beginning"
  }
}

output {
  kafka {
    topic_id => "gamelogs"
    codec => plain {
      format => "%{message}"
      charset => "UTF-8"
    }
    bootstrap_servers => "node01:9092,node02:9092,node03:9092"
  }
}
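One caveat worth a sketch: the file input records its read position in a sincedb file under the data directory, so start_position => "beginning" only applies to files Logstash has not seen before. For repeatable tests, sincedb can be pointed at /dev/null (an illustrative snippet):

```conf
input {
  file {
    path => "/root/logserver/gamelog.txt"
    start_position => "beginning"
    sincedb_path => "/dev/null"   # forget read positions between runs (testing only)
  }
}
```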

Using the bin tools:

List the installed plugins:

/usr/share/logstash/bin/logstash-plugin list

Check that a configuration file is written correctly:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka_to_es.conf --config.test_and_exit

Running as a service

systemctl cat logstash
# /etc/systemd/system/logstash.service
[Unit]
Description=logstash

[Service]
Type=simple
User=logstash
Group=logstash
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"
Restart=always
WorkingDirectory=/
Nice=19
LimitNOFILE=16384

# When stopping, how long to wait before giving up and sending SIGKILL?
# Keep in mind that SIGKILL on a process can cause data loss.
TimeoutStopSec=infinity

[Install]
WantedBy=multi-user.target

Reference links:

https://www.elastic.co/guide/en/logstash/current/index.html

https://cloud.tencent.com/developer/article/1353068?from=10680

https://www.cnblogs.com/lshan/p/14121342.html
