In the cloud-native era, the ubiquity of Kubernetes and Docker has made containerized deployment the norm, but the log management that comes with it gives operations teams no end of headaches:
🔹 Containers drift dynamically — where do their logs end up?
🔹 How do you collect, filter, and search massive log volumes in real time?
🔹 Why do traditional log tools fall short under a microservices architecture?
The ELK Stack (Elasticsearch + Logstash + Kibana) is the classic combination for breaking this deadlock, and Logstash, as the core "pipeline worker", shoulders the work of cleaning, transforming, and routing logs. This article walks you step by step through deploying Logstash and building an efficient container log collection pipeline!
01
Download the Logstash Chart Package
-
Fetch the official chart
$ helm repo add elastic https://helm.elastic.co --force-update
"elastic" has been added to your repositories
$ helm pull elastic/logstash --version 7.13.3
-
Push to the private registry
$ helm push logstash-7.13.3.tgz oci://core.jiaxzeng.com/plugins
Pushed: core.jiaxzeng.com/plugins/logstash:7.13.3
Digest: sha256:e833b285f22b9fcdd4eb88943d440e4bcf064dab0139f246299322afca969298
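To confirm the upload, you can pull the chart back from the private registry. This is an optional sanity check, assuming the same registry address as the push above (OCI registry support requires Helm ≥ 3.8):

```shell
# Pull the chart back from the private OCI registry to verify it was pushed correctly
helm pull oci://core.jiaxzeng.com/plugins/logstash --version 7.13.3
```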
02
Configure the Logstash Deployment File
fullnameOverride: logstash
replicas: 3
image: "core.jiaxzeng.com/library/logstash"
logstashJavaOpts: "-Xmx1g -Xms1g"
resources:
  requests:
    cpu: "100m"
    memory: "1536Mi"
  limits:
    cpu: "1000m"
    memory: "1536Mi"
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
httpPort: 9600
logstashPipeline:
  logstash.conf: |
    input {
      kafka {
        # Plaintext connection
        # bootstrap_servers => "xxx:9092,xxx:9092,xxx:9092"
        # topics => ["k8s_logs"]
        # group_id => "test"

        # Security protocol: SSL
        # bootstrap_servers => "xxx:9093,xxx:9093,xxx:9093"
        # topics => ["k8s_logs"]
        # group_id => "test"
        # security_protocol => "SSL"
        # ssl_truststore_location => "/usr/share/logstash/certs/kafka.server.truststore.p12"
        # ssl_truststore_password => "truststore_password"
        # ssl_truststore_type => "PKCS12"

        # SASL authentication mechanism: SCRAM-SHA-512
        # bootstrap_servers => "xxx:9094,xxx:9094,xxx:9094"
        # topics => ["k8s_logs"]
        # group_id => "test"
        # security_protocol => "SASL_PLAINTEXT"
        # sasl_mechanism => "SCRAM-SHA-512"
        # sasl_jaas_config => "org.apache.kafka.common.security.scram.ScramLoginModule required username='admin' password='admin-password';"

        # Security protocol: SASL_SSL
        bootstrap_servers => "xxx:9095,xxx:9095,xxx:9095"
        topics => ["k8s_logs"]
        group_id => "k8s_logs"
        security_protocol => "SASL_SSL"
        sasl_mechanism => "SCRAM-SHA-512"
        sasl_jaas_config => "org.apache.kafka.common.security.scram.ScramLoginModule required username='admin' password='admin-password';"
        ssl_truststore_location => "/usr/share/logstash/certs/kafka/kafka.server.truststore.p12"
        ssl_truststore_password => "truststore_password"
        ssl_truststore_type => "PKCS12"
      }
    }
    filter {
      json { source => "message" }
      # Depends on the json filter above
      mutate {
        remove_field => ["container","agent","log","input","ecs","host","@version","fields","@metadata"]
      }
    }
    output {
      elasticsearch {
        hosts => ["https://elasticsearch.obs-system.svc:9200"]
        # Elasticsearch credentials
        user => "elastic"
        password => "admin@123"
        # SSL settings
        ssl => true
        ssl_certificate_verification => true
        truststore => "/usr/share/logstash/certs/es/http.p12"
        truststore_password => "http.p12"
        # Use the Elasticsearch ILM feature from Logstash
        ilm_enabled => true
        ilm_rollover_alias => "k8s-logs"
        ilm_pattern => "{now/d}-000001"
        ilm_policy => "jiaxzeng"
        # Key settings: disable automatic template creation and use the existing template
        manage_template => false
        template_name => "k8s-logs"
      }
    }
# Do not report Logstash monitoring data to Elasticsearch
extraEnvs:
  - name: XPACK_MONITORING_ENABLED
    value: "false"
secretMounts:
  - name: kafka-ssl
    secretName: kafka-ssl-secret
    path: /usr/share/logstash/certs/kafka
  - name: es-ssl
    secretName: elastic-certificates
    path: /usr/share/logstash/certs/es
Tip: The ILM policy and index template must be created in advance; see the previous article if you are unsure how.
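The secretMounts section above assumes two Secrets already exist in the obs-system namespace. If you have not created them yet, they can be built from the truststore files referenced in the pipeline; the file names here are taken from the mount paths in the config and should be adjusted to your actual certificate files:

```shell
# Kafka truststore, mounted under /usr/share/logstash/certs/kafka
kubectl -n obs-system create secret generic kafka-ssl-secret \
  --from-file=kafka.server.truststore.p12

# Elasticsearch HTTP certificate, mounted under /usr/share/logstash/certs/es
kubectl -n obs-system create secret generic elastic-certificates \
  --from-file=http.p12
```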
03
Deploy Logstash
$ helm -n obs-system install logstash -f logstash-values.yaml logstash
NAME: logstash
LAST DEPLOYED: Tue Mar 25 16:38:16 2025
NAMESPACE: obs-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Watch all cluster members come up.
$ kubectl get pods --namespace=obs-system -l app=logstash -w
04
Verify Logstash
1. Verify the Pods are running
$ kubectl -n obs-system get pod -l app=logstash
NAME READY STATUS RESTARTS AGE
logstash-0 1/1 Running 0 31m
logstash-1 1/1 Running 0 31m
logstash-2 1/1 Running 0 31m
2. Check whether indices are being created
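One quick way to check, using the credentials and service address from the pipeline's output section (run this from a Pod inside the cluster or via a port-forward; -k skips certificate verification for a quick look):

```shell
# List the indices created behind the k8s-logs rollover alias
curl -sk -u elastic:admin@123 \
  "https://elasticsearch.obs-system.svc:9200/_cat/indices/k8s-logs-*?v"
```

If Logstash is consuming from Kafka correctly, an index such as k8s-logs-{now/d}-000001 should appear shortly after deployment.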
05
Conclusion
With the log hub built on Logstash, container logs are no longer scattered fragments but the cornerstone of an observability system:
👉 Real-time monitoring: combine with Kibana dashboards to locate faults quickly
👉 Cost optimization: automatically archive cold data to object storage
👉 Smart alerting: trigger WeCom/DingTalk notifications based on log level