ELK 5.4.1 installation: https://blog.51cto.com/xiumin/1933143 ; https://blog.51cto.com/zero01/2079879
ElasticSearch: 55 (data), 56 (data, master), 57 (data)
Kibana: 55
Logstash: 56
Start Logstash: systemctl start logstash
1. Create a Kafka topic:
bin/kafka-topics.sh --create --topic mytp3 --zookeeper 10.0.44.55:2181 --replication-factor 1 --partitions 1
bin/kafka-topics.sh --list --zookeeper 10.0.44.55:2181
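To confirm the topic was created with the expected partition and replication settings, kafka-topics.sh can also describe it (a quick check against the same ZooKeeper address):
bin/kafka-topics.sh --describe --topic mytp3 --zookeeper 10.0.44.55:2181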
2. Create a producer:
bin/kafka-console-producer.sh --broker-list 10.0.44.55:9092 --topic mytp3
Produce some sample data:
93.123.23.12 GET /index.html 15824 0.043
93.123.23.1 GET /index.html 15824 0.046
197.199.253.1 GET /index.html 15824 0.045
197.199.253.15 GET /index.html 15824 0.047
218.189.25.129 GET /index.html 15824 0.043
218.189.25.130 GET /index.html 15824 0.046
149.126.86.1 GET /index.html 15824 0.045
149.126.86.7 GET /index.html 15824 0.047
218.176.242.4 GET /index.html 15824 0.043
218.176.242.7 GET /index.html 15824 0.046
2.1 Create a consumer
bin/kafka-console-consumer.sh --bootstrap-server 10.0.44.55:9092 --topic mytp3 --from-beginning
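Once consumers are attached (the console consumer here, or the Logstash group "test1" defined in step 3), their consumer groups and offsets can be inspected; a sketch, noting that older Kafka versions may additionally require the --new-consumer flag:
bin/kafka-consumer-groups.sh --bootstrap-server 10.0.44.55:9092 --list
bin/kafka-consumer-groups.sh --bootstrap-server 10.0.44.55:9092 --describe --group test1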
3. Define the Logstash config (/lihua/logConf/kafka-es3.conf):
input {
  kafka {
    bootstrap_servers => ["10.0.44.55:9092"]
    client_id => "test1"
    group_id => "test1"
    auto_offset_reset => "earliest"
    consumer_threads => 1
    decorate_events => true
    topics => ["mytp3"]
    type => "mykafkalog"
  }
}
filter {
  if [type] == "mykafkalog" {
    grok {
      match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
    }
    geoip {
      source => "client"
      target => "geoip"
    }
  }
}
output {
  if [type] == "mykafkalog" {
    elasticsearch {
      hosts => ["10.0.44.55:9200"]
      index => "logstash-kafkageo-%{+YYYY.MM.dd}"
    }
  }
}
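For reference, applied to the first sample line above ("93.123.23.12 GET /index.html 15824 0.043"), the grok pattern would extract roughly the following fields (illustrative only; the real event also carries @timestamp, the original message, Kafka metadata from decorate_events, and the geoip.* fields added by the geoip filter):
{
  "client"   : "93.123.23.12",
  "method"   : "GET",
  "request"  : "/index.html",
  "bytes"    : "15824",
  "duration" : "0.043",
  "type"     : "mykafkalog"
}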
# In the default logstash-* index template (mapping), location is mapped as the geo_point type
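A quick way to check this is to fetch the template, which the elasticsearch output plugin typically registers under the name "logstash"; geoip.location should show "type": "geo_point":
curl '10.0.44.55:9200/_template/logstash?pretty'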
4. Test the config:
cd /usr/share/logstash/bin
./logstash --path.settings /etc/logstash/ -f /lihua/logConf/kafka-es3.conf --config.test_and_exit
5. Start Logstash:
./logstash --path.settings /etc/logstash/ -f /lihua/logConf/kafka-es3.conf
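Once Logstash has consumed a few messages from the topic, the daily index should appear; a quick sanity check:
curl '10.0.44.55:9200/_cat/indices/logstash-kafkageo-*?v'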
============================
ES:
Basic concepts:
In Elasticsearch, documents belong to a type, and types live inside an index; a rough analogy to a traditional relational database:
Relational DB -> Databases -> Tables -> Rows      -> Columns
Elasticsearch -> Indices   -> Types  -> Documents -> Fields
An Elasticsearch cluster can contain multiple indices (databases), each index can contain multiple types (tables), each type contains multiple documents (rows), and each document contains multiple fields (columns).
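As a tiny illustration of the analogy (the index name myindex and type name mytype are made up for this example), indexing a single document creates the index, the type, the document, and its fields in one request:
curl -XPUT '10.0.44.55:9200/myindex/mytype/1?pretty' -H "Content-Type: application/json" -d '
{
  "client" : "1.2.3.4",
  "method" : "GET"
}'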
curl '10.0.44.55:9200/_cluster/health?pretty'
curl '10.0.44.55:9200/_cluster/state?pretty'
curl '10.0.44.55:9200/_cat/indices?v'
curl -X GET '10.0.44.55:9200/kafkalog-2019.04.29/_search?pretty'
Request /Index/Type/id:
curl '10.0.44.55:9200/kafkalog-2019.04.29/doc/mJkSZ2oBP0SUE5RK_Suj?pretty=true'
/Index/Type/_search queries all documents:
curl '10.0.44.55:9200/kafkalog-2019.04.29/doc/_search?pretty=true'
Delete:
curl -X DELETE '10.0.44.55:9200/kafkalog-2019.04.29/doc/mJkSZ2oBP0SUE5RK_Suj'
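Deleting by condition rather than by id is also possible via the _delete_by_query API (a sketch; this removes every document matching the query):
curl -XPOST -H "Content-Type: application/json" '10.0.44.55:9200/kafkalog-2019.04.29/doc/_delete_by_query?pretty' -d '
{
  "query" : { "match" : { "client" : "10.0.44.55" }}
}'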
Conditional query:
curl -H "Content-Type: application/json" '10.0.44.55:9200/kafkalog-2019.04.29/doc/_search?pretty' -d '
{
  "query" : { "match" : { "client" : "10.0.44.55" }},
  "from" : 1,
  "size" : 2
}'
OR operation: "10.0.44.55 10.0.44.56"
curl -H "Content-Type: application/json" '10.0.44.55:9200/kafkalog-2019.04.29/doc/_search?pretty' -d '
{
  "query" : { "match" : { "client" : "10.0.44.55 10.0.44.56" }}
}'
AND operations require a bool query: must, should, must_not
curl -H "Content-Type: application/json" '10.0.44.55:9200/kafkalog-2019.04.29/doc/_search?pretty' -d '
{
  "query" : {
    "bool" : {
      "must" : [
        { "match" : { "client" : "10.0.44.55" }},
        { "match" : { "_id" : "pJmGZ2oBP0SUE5RKoytU" }}
      ]
    }
  },
  "highlight" : {
    "fields" : {
      "client" : {}
    }
  }
}'
Nested query:
curl -H "Content-Type: application/json" '10.0.44.55:9200/kafkalog-2019.04.29/doc/_search?pretty' -d '
{
  "query" : {
    "bool" : {
      "should" : [
        { "match" : { "client" : "10.0.44.56" }},
        { "bool" : { "must" : [
          { "match" : { "client" : "10.0.44.55" }},
          { "match" : { "_id" : "pJmGZ2oBP0SUE5RKoytU" }}
        ]}}
      ]
    }
  }
}'
Range query: gt (>), lt (<), gte (>=), lte (<=)
curl -H "Content-Type: application/json" '10.0.44.55:9200/kafkalog-2019.04.29/doc/_search?pretty' -d '
{
  "query" : {
    "range" : {
      "bytes" : {
        "gt" : 15823,
        "lt" : 15825
      }
    }
  }
}'
Aggregation query:
curl -H "Content-Type: application/json" '10.0.44.55:9200/kafkalog-2019.04.29/doc/_search?pretty' -d '
{
  "aggs" : {
    "clients" : {
      "terms" : { "field" : "client.keyword" }
    }
  }
}'
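Terms buckets can also carry metric sub-aggregations, e.g. an average duration per client. Note this sketch assumes duration is indexed as a numeric field (for example by writing %{NUMBER:duration:float} in the grok pattern); an avg aggregation on a text field will fail:
curl -H "Content-Type: application/json" '10.0.44.55:9200/kafkalog-2019.04.29/doc/_search?pretty' -d '
{
  "size" : 0,
  "aggs" : {
    "clients" : {
      "terms" : { "field" : "client.keyword" },
      "aggs" : {
        "avg_duration" : { "avg" : { "field" : "duration" } }
      }
    }
  }
}'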
# Fetch only the specified fields
curl '10.0.44.55:9200/kafkalog-2019.04.29/doc/mpkSZ2oBP0SUE5RK_Suj?_source=client,request'
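The same _source filtering also works on _search, returning only the listed fields for every hit:
curl '10.0.44.55:9200/kafkalog-2019.04.29/doc/_search?_source=client,request&pretty'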
# Get the index mapping
curl -XGET '10.0.44.55:9200/kafkaloggeo-2019.04.30/_mapping?pretty'
============================
Kibana: http://10.0.44.55:5601