Filebeat + Logstash log collection: "Invalid version of beats protocol" error

This post resolves an "Invalid version of beats protocol" error raised by Logstash while receiving Nginx logs from Filebeat. The problem was fixed by correcting the Filebeat configuration file.

The project uses Filebeat to collect Nginx logs and ship them to Logstash, which formats them and forwards them on to Elasticsearch. After Filebeat and Logstash both started successfully, Logstash reported the following error (Invalid version of beats protocol):

[2021-03-19T11:42:55,089][INFO ][org.logstash.beats.BeatsHandler][main][beb2be8c82db2bab06de95da0bf4611dd78ef6050f2c4ec2e7f5444d4ea7eb8c] [local: 172.16.11.127:5044, remote: 10.168.60.113:38356] Handling exception: io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69 (caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69)
[2021-03-19T11:42:55,091][WARN ][io.netty.channel.DefaultChannelPipeline][main][beb2be8c82db2bab06de95da0bf4611dd78ef6050f2c4ec2e7f5444d4ea7eb8c] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:404) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:371) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:354) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.AbstractChannelHandlerContext.access$300(AbstractChannelHandlerContext.java:61) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.AbstractChannelHandlerContext$4.run(AbstractChannelHandlerContext.java:253) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.49.Final.jar:4.1.49.Final]
        at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69
        at org.logstash.beats.Protocol.version(Protocol.java:22) ~[logstash-input-beats-6.0.14.jar:?]
        at org.logstash.beats.BeatsParser.decode(BeatsParser.java:62) ~[logstash-input-beats-6.0.14.jar:?]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        ... 11 more
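
For context, the post never shows the Logstash side: the beats input was listening on port 5044 (the `local: 172.16.11.127:5044` in the log above). Below is a minimal sketch of what such a pipeline could look like, assuming a default-format Nginx access log; the grok pattern, index name, and Elasticsearch address are illustrative assumptions, not the article's actual pipeline:

```conf
# Hypothetical Logstash pipeline (the article's actual pipeline file is not shown).
input {
  beats {
    port => 5044                              # the port Filebeat must speak the beats protocol to
  }
}

filter {
  grok {
    # Assumes the default Nginx access-log format, which matches the
    # combined Apache log pattern.
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => ["172.16.11.127:9200"]           # assumed; 9200 appears commented out in filebeat.yml
    index => "nginx-access-%{+YYYY.MM.dd}"    # illustrative index name
  }
}
```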

I combed through documentation and forums and found only one suggested cause: a TLS mismatch. After various attempts along those lines led nowhere, I read the Filebeat and Logstash startup logs carefully and finally spotted the real cause.

The Filebeat startup log contained this error:

2021-03-19T11:42:40.049+0800    INFO    [publisher_pipeline_output]     pipeline/output.go:143  Connecting to backoff(elasticsearch(http://172.16.11.127:5044))
2021-03-19T11:42:40.049+0800    INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2021-03-19T11:42:40.049+0800    INFO    [publisher]     pipeline/retry.go:223     done
2021-03-19T11:42:41.317+0800    ERROR   [publisher_pipeline_output]     pipeline/output.go:154  Failed to connect to backoff(elasticsearch(http://172.16.11.127:5044)): Get "http://172.16.11.127:5044": read tcp 10.168.60.113:38348->172.16.11.127:5044: read: connection reset by peer
2021-03-19T11:42:41.317+0800    INFO    [publisher_pipeline_output]     pipeline/output.go:145  Attempting to reconnect to backoff(elasticsearch(http://172.16.11.127:5044)) with 1 reconnect attempt(s)

Inspecting filebeat.yml revealed the cause: the data had originally been shipped straight to Elasticsearch, and that output section was never commented out, so Filebeat was speaking HTTP (the Elasticsearch protocol) to Logstash's beats port. The beats input then read a raw HTTP byte as a protocol version, which is where the cryptic number comes from: 69 is the ASCII code for 'E'. The offending section of filebeat.yml:


# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["172.16.11.127:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  hosts: ["172.16.11.127:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

The fix: uncomment output.logstash: and comment out output.elasticsearch:, so the file reads as follows:

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.16.11.127:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

Restart Filebeat, and the problem is solved! (Before restarting, `filebeat test output` is a quick way to verify that Filebeat can actually reach the configured output.)

A typical workflow for collecting Spring Boot application logs with Filebeat, Logstash, and Kafka looks like this:

### Log generation

In a Spring Boot application, logs can be produced and sent to Logstash by configuring `logback.xml`. Example configuration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property resource="properties/logback-variables.properties" />

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder charset="UTF-8">
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>172.16.1.16:9250</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>

    <root level="info">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="LOGSTASH" />
    </root>
</configuration>
```

This configuration writes logs to the console and simultaneously sends them to the specified Logstash address [^4].

### Filebeat configuration

Edit `filebeat-7.9.3/filebeat.yml` to configure the log paths of the Spring Boot applications to read; different services can use different paths. For example:

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/aisys/logs/member-service/*.log
  fields:
    log_topic: member-service
```

This makes Filebeat read the log files under the given path and attach the custom field `log_topic` to each event [^3].

### Log collection and processing

Set up Kafka as the middleware for log collection, and configure Logstash to receive and process the logs sent by Filebeat: Filebeat sends the collected logs to Kafka, Logstash consumes them from Kafka for processing, and the processed logs are finally stored in Elasticsearch [^1] (see the pipeline sketch after this section).

### Log visualization

Deploy Logstash as the log collector in a Docker environment; once collection is done, write the data into Elasticsearch and use Kibana for visualization [^2].
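
The collection step above is described only in prose. Here is a minimal sketch of the Logstash side of that flow, reading from Kafka and writing to Elasticsearch; the broker and Elasticsearch addresses are hypothetical, and the topic name reuses the `log_topic` value from the Filebeat example:

```conf
# Hypothetical Logstash pipeline for the Kafka-based flow described above.
input {
  kafka {
    bootstrap_servers => "172.16.1.16:9092"    # assumed Kafka broker address
    topics => ["member-service"]               # topic matching the Filebeat log_topic field
    codec => "json"                            # Filebeat publishes events as JSON
  }
}

output {
  elasticsearch {
    hosts => ["172.16.1.16:9200"]              # assumed Elasticsearch address
    index => "member-service-%{+YYYY.MM.dd}"   # illustrative daily index
  }
}
```

On the Filebeat side, when the Kafka output is used, the topic is typically set dynamically with `topic: '%{[fields.log_topic]}'` in `filebeat.yml`, so each service's logs land on their own topic.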