kcat - Apache Kafka producer and consumer tool


kcat Usage Guide (formerly kafkacat)

kcat is a lightweight command-line tool for producing, consuming, and inspecting Kafka metadata. Built on librdkafka, it is well suited to testing and debugging Kafka message flows, and supports advanced features such as SASL, TLS, Schema Registry, Avro, and JSON.


🔧 Typical Usage Command

docker run --rm --name=kcat --network=host \
  k8smaster.qfusion.irds/irds/kcat:1.7.1 \
  -b 10.10.x.x:2187 \
  -X security.protocol=SASL_PLAINTEXT \
  -X sasl.mechanism=SCRAM-SHA-512 \
  -X sasl.username='kafka-6c7c3c69-admin' \
  -X sasl.password='xxx' \
  -t bpx \
  -L

🔍 Parameter Explanation

  • --rm: automatically remove the container after it exits
  • --network=host: share the host network so the container can reach the Kafka broker ports
  • -b <ip:port>: Kafka broker address(es); separate multiple brokers with commas
  • -X security.protocol=...: Kafka security protocol (e.g. SASL_PLAINTEXT, SASL_SSL)
  • -X sasl.mechanism=...: authentication mechanism, e.g. PLAIN, SCRAM-SHA-256, SCRAM-SHA-512
  • -X sasl.username=...: Kafka login username
  • -X sasl.password=...: Kafka login password
  • -t <topic>: topic name (e.g. bpx)
  • -L: metadata mode (list all topics, partitions, etc.)
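
If a broker is unreachable, the -L query above blocks for the metadata timeout before failing. A shorter timeout (-m, documented in the built-in help further down) makes connectivity checks fail fast; this is only a sketch, and the broker address and credentials are placeholders for your own environment:

kcat -L -m 2 -b 10.10.x.x:2187 \
  -X security.protocol=SASL_PLAINTEXT \
  -X sasl.mechanism=SCRAM-SHA-512 \
  -X sasl.username='kafka-user' \
  -X sasl.password='xxx'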

🧭 Mode Overview

  • -C (consume mode): read messages from Kafka
  • -P (produce mode): send messages to Kafka
  • -L (metadata mode): list broker and topic metadata
  • -Q (query mode): look up the offset for a given timestamp

📥 Consuming Kafka Messages

kcat -C -b 10.10.x.x:2187 -t test-topic -o beginning -e -c 10
  • -C: consume mode
  • -t: topic name
  • -o beginning: start reading from the beginning (also supports end, an absolute offset, or a relative offset such as -10)
  • -c 10: read at most 10 messages
  • -e: exit after the last message has been read
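
To inspect a single partition and see message keys alongside the payloads, -p and -K can be combined with the options above. A minimal sketch (broker and topic names are placeholders; -o -5 reads the last five messages of partition 0 and prints each key before its payload):

kcat -C -b 10.10.x.x:2187 -t test-topic -p 0 -o -5 -e -K ' => '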

📤 Producing Kafka Messages

echo "hello kafka" | kcat -P -b 10.10.x.x:2187 -t test-topic
  • -P: produce mode; reads messages from stdin (or files) and sends them
  • -t: topic to send to
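
Keys and headers can be attached while producing: -K sets the delimiter that splits each stdin line into key and value, and -H adds a message header (both are described in the built-in help further down). A minimal sketch with placeholder broker, topic, and header name:

echo "user-42:hello kafka" | kcat -P -b 10.10.x.x:2187 -t test-topic -K : -H source=kcat-demo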

🧾 Querying the Offset for a Timestamp

kcat -Q -b 10.10.x.x:2187 -t mytopic:0:1684200000000
  • -Q: query mode
  • -t topic:partition:timestamp: return the offset in the given partition for the given timestamp (in milliseconds)
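
On systems with GNU date, the millisecond timestamp can be generated on the fly. A sketch assuming partition 0 and placeholder broker/topic names:

TS=$(date -d '2023-05-16 00:00:00' +%s%3N)   # milliseconds since epoch
kcat -Q -b 10.10.x.x:2187 -t mytopic:0:$TS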

🗂 Output Format Control

  • -f: custom output format string
  • Commonly used format tokens:
    %s: message payload
    %k: message key
    %t: topic name
    %p: partition number
    %o: message offset
    %T: timestamp (ms)

Example:

kcat -C -b broker -t topic -f 'Topic %t [%p] offset %o: %s\n'
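
The full token list appears in the built-in help further down; %k (key), %T (timestamp), and %h (headers) are also usable in the same way. An illustrative variant with placeholder broker and topic:

kcat -C -b 10.10.x.x:2187 -t test-topic -e -f 'ts=%T key=%k headers=%h: %s\n'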

🔐 Secure Connection Options (SASL)

-X security.protocol=SASL_PLAINTEXT
-X sasl.mechanism=SCRAM-SHA-512
-X sasl.username=...
-X sasl.password=...
  • Supported protocols: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL
  • Supported mechanisms: PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI, OAUTHBEARER
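
For TLS-encrypted clusters, SASL_SSL is usually combined with a CA certificate via the librdkafka property ssl.ca.location. A sketch with placeholder broker address, port, credentials, and certificate path:

kcat -L -b 10.10.x.x:9093 \
  -X security.protocol=SASL_SSL \
  -X sasl.mechanism=SCRAM-SHA-512 \
  -X sasl.username='kafka-user' \
  -X sasl.password='xxx' \
  -X ssl.ca.location=/path/to/ca.pem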

🧪 Handy Command Reference

1. List all topics and partitions in the cluster
kcat -L -b broker_ip:port
2. Consume the most recent message of a topic
kcat -C -b broker -t topic -o -1 -c 1 -e
3. Consume messages and output them as JSON
kcat -C -b broker -t topic -J
4. Produce multiple messages at once (batch send)
printf "msg1\nmsg2\nmsg3\n" | kcat -P -b broker -t topic

📌 Tips

  • If you use kcat frequently, you can save the connection properties to a file and reference it with -F:

    cat ~/.config/kcat.conf
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=SCRAM-SHA-512
    sasl.username=kafka-user
    sasl.password=secret
    

    Then you only need to run:

    kcat -F ~/.config/kcat.conf -b broker -t topic -C
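
    Alternatively, the KCAT_CONFIG environment variable can point at the same file; per the built-in help below, -F takes precedence over it, and $HOME/.config/kcat.conf is read by default. Broker and topic below are placeholders:

    export KCAT_CONFIG=~/.config/kcat.conf
    kcat -b 10.10.x.x:2187 -t test-topic -C -c 5 -e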
    

For reference, the full built-in help output of kcat 1.7.1 (kcat -h) follows.

Usage: kcat <options> [file1 file2 .. | topic1 topic2 ..]]
kcat - Apache Kafka producer and consumer tool
https://github.com/edenhill/kcat
Copyright (c) 2014-2021, Magnus Edenhill
Version 1.7.1 (JSON, Avro, Transactions, IncrementalAssign, JSONVerbatim, librdkafka 1.8.2 builtin.features=gzip,snappy,ssl,sasl,regex,lz4,sasl_plain,sasl_scram,plugins,zstd,sasl_oauthbearer)


General options:
  -C | -P | -L | -Q  Mode: Consume, Produce, Metadata List, Query mode
  -G <group-id>      Mode: High-level KafkaConsumer (Kafka >=0.9 balanced consumer groups)
                     Expects a list of topics to subscribe to
  -t <topic>         Topic to consume from, produce to, or list
  -p <partition>     Partition
  -b <brokers,..>    Bootstrap broker(s) (host[:port])
  -D <delim>         Message delimiter string:
                     a-z | \r | \n | \t | \xNN ..
                     Default: \n
  -K <delim>         Key delimiter (same format as -D)
  -c <cnt>           Limit message count
  -m <seconds>       Metadata (et.al.) request timeout.
                     This limits how long kcat will block
                     while waiting for initial metadata to be
                     retrieved from the Kafka cluster.
                     It also sets the timeout for the producer's
                     transaction commits, init, aborts, etc.
                     Default: 5 seconds.
  -F <config-file>   Read configuration properties from file,
                     file format is "property=value".
                     The KCAT_CONFIG=path environment can also be used, but -F takes precedence.
                     The default configuration file is $HOME/.config/kcat.conf
  -X list            List available librdkafka configuration properties
  -X prop=val        Set librdkafka configuration property.
                     Properties prefixed with "topic." are
                     applied as topic properties.
  -X schema.registry.prop=val Set libserdes configuration property for the Avro/Schema-Registry client.
  -X dump            Dump configuration and exit.
  -d <dbg1,...>      Enable librdkafka debugging:
                     all,generic,broker,topic,metadata,feature,queue,msg,protocol,cgrp,security,fetch,interceptor,plugin,consumer,admin,eos,mock,assignor,conf
  -q                 Be quiet (verbosity set to 0)
  -v                 Increase verbosity
  -E                 Do not exit on non-fatal error
  -V                 Print version
  -h                 Print usage help

Producer options:
  -z snappy|gzip|lz4 Message compression. Default: none
  -p -1              Use random partitioner
  -D <delim>         Delimiter to split input into messages
  -K <delim>         Delimiter to split input key and message
  -k <str>           Use a fixed key for all messages.
                     If combined with -K, per-message keys
                     takes precendence.
  -H <header=value>  Add Message Headers (may be specified multiple times)
  -l                 Send messages from a file separated by
                     delimiter, as with stdin.
                     (only one file allowed)
  -T                 Output sent messages to stdout, acting like tee.
  -c <cnt>           Exit after producing this number of messages
  -Z                 Send empty messages as NULL messages
  file1 file2..      Read messages from files.
                     With -l, only one file permitted.
                     Otherwise, the entire file contents will
                     be sent as one single message.
  -X transactional.id=.. Enable transactions and send all
                     messages in a single transaction which
                     is committed when stdin is closed or the
                     input file(s) are fully read.
                     If kcat is terminated through Ctrl-C
                     (et.al) the transaction will be aborted.

Consumer options:
  -o <offset>        Offset to start consuming from:
                     beginning | end | stored |
                     <value>  (absolute offset) |
                     -<value> (relative offset from end)
                     s@<value> (timestamp in ms to start at)
                     e@<value> (timestamp in ms to stop at (not included))
  -e                 Exit successfully when last message received
  -f <fmt..>         Output formatting string, see below.
                     Takes precedence over -D and -K.
  -J                 Output with JSON envelope
  -s key=<serdes>    Deserialize non-NULL keys using <serdes>.
  -s value=<serdes>  Deserialize non-NULL values using <serdes>.
  -s <serdes>        Deserialize non-NULL keys and values using <serdes>.
                     Available deserializers (<serdes>):
                       <pack-str> - A combination of:
                                    <: little-endian,
                                    >: big-endian (recommended),
                                    b: signed 8-bit integer
                                    B: unsigned 8-bit integer
                                    h: signed 16-bit integer
                                    H: unsigned 16-bit integer
                                    i: signed 32-bit integer
                                    I: unsigned 32-bit integer
                                    q: signed 64-bit integer
                                    Q: unsigned 64-bit integer
                                    c: ASCII character
                                    s: remaining data is string
                                    $: match end-of-input (no more bytes remaining or a parse error is raised).
                                       Not including this token skips any
                                       remaining data after the pack-str is
                                       exhausted.
                       avro       - Avro-formatted with schema in Schema-Registry (requires -r)
                     E.g.: -s key=i -s value=avro - key is 32-bit integer, value is Avro.
                       or: -s avro - both key and value are Avro-serialized
  -r <url>           Schema registry URL (when avro deserializer is used with -s)
  -D <delim>         Delimiter to separate messages on output
  -K <delim>         Print message keys prefixing the message
                     with specified delimiter.
  -O                 Print message offset using -K delimiter
  -c <cnt>           Exit after consuming this number of messages
  -Z                 Print NULL values and keys as "NULL" instead of empty.
                     For JSON (-J) the nullstr is always null.
  -u                 Unbuffered output

Metadata options (-L):
  -t <topic>         Topic to query (optional)

Query options (-Q):
  -t <t>:<p>:<ts>    Get offset for topic <t>,
                     partition <p>, timestamp <ts>.
                     Timestamp is the number of milliseconds
                     since epoch UTC.
                     Requires broker >= 0.10.0.0 and librdkafka >= 0.9.3.
                     Multiple -t .. are allowed but a partition
                     must only occur once.

Format string tokens:
  %s                 Message payload
  %S                 Message payload length (or -1 for NULL)
  %R                 Message payload length (or -1 for NULL) serialized
                     as a binary big endian 32-bit signed integer
  %k                 Message key
  %K                 Message key length (or -1 for NULL)
  %T                 Message timestamp (milliseconds since epoch UTC)
  %h                 Message headers (n=v CSV)
  %t                 Topic
  %p                 Partition
  %o                 Message offset
  \n \r \t           Newlines, tab
  \xXX \xNNN         Any ASCII character
 Example:
  -f 'Topic %t [%p] at offset %o: key %k: %s\n'

JSON message envelope (on one line) when consuming with -J:
 { "topic": str, "partition": int, "offset": int,
   "tstype": "create|logappend|unknown", "ts": int, // timestamp in milliseconds since epoch
   "broker": int,
   "headers": { "<name>": str, .. }, // optional
   "key": str|json, "payload": str|json,
   "key_error": str, "payload_error": str, //optional
   "key_schema_id": int, "value_schema_id": int //optional
 }
 notes:
   - key_error and payload_error are only included if deserialization fails.
   - key_schema_id and value_schema_id are included for successfully deserialized Avro messages.

Consumer mode (writes messages to stdout):
  kcat -b <broker> -t <topic> -p <partition>
 or:
  kcat -C -b ...

High-level KafkaConsumer mode:
  kcat -b <broker> -G <group-id> topic1 top2 ^aregex\d+

Producer mode (reads messages from stdin):
  ... | kcat -b <broker> -t <topic> -p <partition>
 or:
  kcat -P -b ...

Metadata listing:
  kcat -L -b <broker> [-t <topic>]

Query offset by timestamp:
  kcat -Q -b broker -t <topic>:<partition>:<timestamp>


🧩 Related: Explaining a kafka-console-consumer.sh Command

Kafka's built-in console consumer can perform the same kind of test consumption. The following command and its parameters are explained step by step:

kafka-console-consumer.sh --bootstrap-server 10.104.170.104:19090 \
  --consumer.config /home/cbs/cbs_kafka_outer/config/sasl.properties \
  --topic crs2-zy-cdn --max-messages 5

1. Command body
kafka-console-consumer.sh is the command-line tool shipped with Kafka for consuming messages from a topic.

2. Key parameters
  • --bootstrap-server 10.104.170.104:19090: the Kafka cluster connection address in IP:port form; here the target broker is 10.104.170.104, listening on port 19090.
  • --consumer.config /home/cbs/cbs_kafka_outer/config/sasl.properties: path to the consumer configuration file, which typically holds the security settings, for example:
      security.protocol=SASL_PLAINTEXT   # security protocol
      sasl.mechanism=PLAIN               # SASL mechanism
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="xxx" password="xxx";
  • --topic crs2-zy-cdn: the topic to consume from; it must exist and the user must have permission to access it.
  • --max-messages 5: consume at most 5 messages and then exit; handy for testing or quickly checking data.

3. Command behavior
The command pulls the latest 5 unconsumed messages from the crs2-zy-cdn topic (by default it starts at the latest offset), connects to the specified broker with SASL authentication, and prints the messages to the console.

4. Common troubleshooting
  • Connection failures: check that the IP and port are reachable, verify firewall rules, and confirm the username/password in sasl.properties.
  • Topic does not exist: run kafka-topics.sh --list --bootstrap-server 10.104.170.104:19090 to list all topics.
  • Authentication errors: make sure the SASL mechanism in sasl.properties matches the broker configuration (e.g. PLAIN vs. SCRAM).

5. Extended examples
# Consume from the beginning (reset the starting offset)
kafka-console-consumer.sh ... --from-beginning
# Specify a consumer group
kafka-console-consumer.sh ... --group test-group
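
For comparison with the kcat options covered earlier in this article, a roughly equivalent read with kcat might look as follows. The SASL settings are placeholders and must match whatever sasl.properties actually contains; -o beginning is used here so the command returns immediately even when no new messages arrive, whereas the console consumer waits for new messages starting from the latest offset:

kcat -C -b 10.104.170.104:19090 -t crs2-zy-cdn -o beginning -c 5 -e \
  -X security.protocol=SASL_PLAINTEXT \
  -X sasl.mechanism=PLAIN \
  -X sasl.username='xxx' \
  -X sasl.password='xxx'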