https://www.elastic.co/guide/en/logstash/current/index.html
Supported plugins
https://www.elastic.co/guide/en/logstash/current/input-plugins.html
Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash.” (Ours is Elasticsearch, naturally.)
To verify your configuration, run the following command:
bin/logstash -f first-pipeline.conf --config.test_and_exit
The --config.test_and_exit option parses your configuration file and reports any errors.
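The contents of first-pipeline.conf are not shown here; a minimal sketch of what such a file could contain, assuming a file input reading a hypothetical log path and a stdout output for inspection:
input {
  file {
    path => "/var/log/test.log"        # hypothetical log file to read
    start_position => "beginning"
  }
}
output {
  stdout {
    codec => rubydebug                 # print each event in a readable form
  }
}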
If the configuration file passes the configuration test, start Logstash with the following command:
bin/logstash -f first-pipeline.conf --config.reload.automatic
The --config.reload.automatic option enables automatic config reloading so that you don’t have to stop and restart Logstash every time you modify the configuration file.
If your pipeline is working correctly, you should see a series of events like the following written to the console:
To try Logstash quickly, you can also pass a pipeline definition directly on the command line with -e instead of a configuration file:
cd logstash-5.5.1
bin/logstash -e 'input { stdin { } } output { stdout {} }'
| Command | Description |
| bin/logstash-plugin list | Will list all installed plugins |
| bin/logstash-plugin list --verbose | Will list installed plugins with version information |
| bin/logstash-plugin list '*namefragment*' | Will list all installed plugins containing a namefragment |
| bin/logstash-plugin list --group output | Will list all installed plugins for a particular group (input, filter, codec, output) |
Installing plugins
bin/logstash-plugin install logstash-output-kafka
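Once installed, the plugin can be referenced in the output section of a pipeline; a minimal sketch, assuming a local Kafka broker and a hypothetical topic name:
output {
  kafka {
    bootstrap_servers => "localhost:9092"   # assumed broker address
    topic_id => "logstash-events"           # hypothetical topic name
  }
}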
Updating plugins
| Command | Description |
| bin/logstash-plugin update | Will update all installed plugins |
| bin/logstash-plugin update logstash-output-kafka | Will update only this plugin |
Removing plugins
bin/logstash-plugin remove logstash-output-kafka
Installing the JDBC input plugin
bin/logstash-plugin install logstash-input-jdbc
input {
  jdbc {
    # path to the MySQL JDBC driver jar and the driver class it provides
    jdbc_driver_library => "/Users/jiaozhiguang/.m2/repository/mysql/mysql-connector-java/6.0.6/mysql-connector-java-6.0.6.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/test"
    jdbc_user => "admin"
    jdbc_password => "admin"
    # cron-style schedule: run the query once every minute
    schedule => "* * * * *"
    statement => "SELECT * from sys_permission"
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}
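To run this pipeline, save the configuration to a file and start Logstash against it (the file name jdbc-pipeline.conf is only an assumed example):
bin/logstash -f jdbc-pipeline.conf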
Predefined Parameters
Some parameters are built-in and can be used from within your queries. Here is the list:
| sql_last_value | The value used to calculate which rows to query. Before any query is run, this is set to Thursday, 1 January 1970, or 0 if use_column_value is true and tracking_column is set. It is updated accordingly after subsequent queries are run. |
Example:
input {
  jdbc {
    statement => "SELECT id, mycolumn1, mycolumn2 FROM my_table WHERE id > :sql_last_value"
    use_column_value => true
    tracking_column => "id"
    # ... other configuration bits
  }
}
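The tracked value survives restarts: the jdbc input persists it in a metadata file, by default $HOME/.logstash_jdbc_last_run. A sketch of the related options, assuming you want to store the state elsewhere or start over from scratch:
input {
  jdbc {
    # ... connection and statement settings as in the examples above
    record_last_run => true                                   # keep updating sql_last_value (the default)
    last_run_metadata_path => "/tmp/.logstash_jdbc_last_run"  # assumed custom location for the state file
    clean_run => false                                        # set to true to reset sql_last_value to its initial value
  }
}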
filter {
  json {
    source => "message"
    remove_field => ["message"]
  }
}
output {
  elasticsearch {
    # Logstash 5.x uses a hosts array; the old host/port/protocol/cluster
    # options were removed from the elasticsearch output plugin.
    hosts => ["192.168.0.199:9200"]
    index => "mysql01"
    document_id => "%{id}"
  }
  stdout {
    codec => json_lines
  }
}
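After the pipeline has run, you can check that the rows arrived in the mysql01 index, assuming Elasticsearch is listening on localhost:9200:
curl 'localhost:9200/mysql01/_search?pretty'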
Full-text search
POST /megacorp/employee/_search
{
  "query" : {
    "match" : {
      "about" : "rock climbing"
    }
  }
}
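The match query above returns documents containing either word and ranks them by relevance; to match only the exact phrase, match_phrase can be used instead (a sketch against the same hypothetical megacorp index):
POST /megacorp/employee/_search
{
  "query" : {
    "match_phrase" : {
      "about" : "rock climbing"
    }
  }
}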
Kibana
https://www.elastic.co/products/kibana
https://www.elastic.co/guide/en/kibana/current/index.html
Startup
./bin/kibana
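Kibana listens on port 5601 by default, so once it has started the UI is reachable at http://localhost:5601. If Elasticsearch is not running on the same host, point Kibana at it via the elasticsearch.url setting in config/kibana.yml (this applies to the 5.x releases used here).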
This article describes how to use Logstash to collect and transform data and send it to Elasticsearch. It covers configuration file testing, automatic config reloading, and plugin management, and provides an example of reading data from a MySQL database and writing it to Elasticsearch.