Kafka and So on.

This article is organized around Kafka. It covers how to consume messages in order (for example, by using a partition key), explains the pull and push models and their trade-offs, analyzes data loss, backlog, and skew and how to handle them, and compares Kafka with other message queues such as RabbitMQ, ZeroMQ, and Redis to highlight Kafka's characteristics.

Kafka: consuming messages in order

Why Kafka uses partitions
• For the Kafka cluster, partitioning provides load balancing across brokers.
• For consumers, partitioning increases parallelism and therefore efficiency.

How Kafka can achieve globally ordered messages:
  Use one topic with a single partition. This does guarantee global order, but it hurts producer throughput and runs against the whole point of partitioning. In general, Kafka cannot guarantee global message ordering; it only guarantees ordering within a single partition of a topic. Messages in each partition are written in order, and (within a consumer group) a partition is consumed by only one consumer, so order is preserved inside that partition. Across partitions, however, ordering is not guaranteed. A minimal single-partition setup is sketched below.
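A minimal sketch of that single-partition setup, using the Java AdminClient; the topic name "orders-global", the broker address, and the replication factor are illustrative assumptions, not values from the article.

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import java.util.Collections;
    import java.util.Properties;

    public class GlobalOrderTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
            try (AdminClient admin = AdminClient.create(props)) {
                // One partition gives a single total order, at the cost of write parallelism.
                NewTopic topic = new NewTopic("orders-global", 1, (short) 3);
                admin.createTopics(Collections.singletonList(topic)).all().get();
            }
        }
    }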

• How Kafka guarantees ordered consumption:
  Kafka's ordered delivery relies entirely on the partition key: messages of the same category are written to the same partition, and each partition is handled by one consumer thread, which keeps the data in order. Apart from specifying a partitionKey when sending, the producer and consumer are instantiated exactly as usual, as the sketch below shows.
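A minimal sketch of sending with a partition key, assuming a hypothetical topic "orders" and a local broker; records that share a key ("order-42" here) are hashed to the same partition and keep their relative order.

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;
    import java.util.Properties;

    public class KeyedProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String orderId = "order-42";   // the partition key: same key -> same partition
                producer.send(new ProducerRecord<>("orders", orderId, "CREATED"));
                producer.send(new ProducerRecord<>("orders", orderId, "PAID"));
                producer.send(new ProducerRecord<>("orders", orderId, "SHIPPED"));
            }
        }
    }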

• How do we preserve order when one consumer uses multiple threads?
Once records are handed off to a thread pool, the threads may finish out of order and scramble the sequence.
Solution
  To keep order when a single consumer dispatches work to multiple threads, put a queue in front of each thread. The consumer hash-dispatches records so that records which must stay together land in the same queue, and each worker thread then drains its own queue, as sketched below.
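A minimal sketch of that hash-dispatch pattern; the topic, group id, and worker count are illustrative assumptions. One thread polls Kafka, routes each record by key hash to a fixed blocking queue, and a dedicated worker drains each queue, so records with the same key are processed in order.

    import org.apache.kafka.clients.consumer.*;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import java.time.Duration;
    import java.util.*;
    import java.util.concurrent.*;

    public class HashDispatchConsumer {
        private static final int WORKERS = 4;

        public static void main(String[] args) throws InterruptedException {
            // One FIFO queue per worker; the same key always maps to the same queue.
            List<BlockingQueue<ConsumerRecord<String, String>>> queues = new ArrayList<>();
            ExecutorService pool = Executors.newFixedThreadPool(WORKERS);
            for (int i = 0; i < WORKERS; i++) {
                BlockingQueue<ConsumerRecord<String, String>> q = new LinkedBlockingQueue<>();
                queues.add(q);
                pool.submit(() -> {
                    try {
                        while (true) {
                            ConsumerRecord<String, String> r = q.take(); // FIFO => per-key order preserved
                            System.out.printf("worker %s processed %s=%s%n",
                                    Thread.currentThread().getName(), r.key(), r.value());
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }

            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "ordered-workers");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("orders"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> r : records) {
                        int idx = Math.floorMod(Objects.hashCode(r.key()), WORKERS); // hash dispatch
                        queues.get(idx).put(r);
                    }
                }
            }
        }
    }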

Kafka internals: pull vs. push

Kafka has producers push messages to the broker and consumers pull messages from the broker.
Push
Pros: messages are pushed to the consumer as soon as they arrive, so delivery is very timely.
Cons: push struggles to adapt to consumers with different consumption rates, because the delivery rate is decided by the broker. Push aims to hand off messages as fast as possible, which easily overwhelms a consumer that cannot keep up; the typical symptoms are denial of service and network congestion. If the push rate is throttled too low instead, consumer capacity is wasted.

Pull
Pros: each consumer can pull messages at a rate matched to its own processing capacity.
Cons: the polling interval is hard to tune. Too short, and the brokers are flooded with requests; too long, and part of the data is inevitably delayed. Real-time behavior is comparatively weaker. (A polling sketch follows.)
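A minimal sketch of the pull loop, with an assumed topic and group id; fetch.min.bytes and fetch.max.wait.ms let the broker hold a fetch request briefly when little data is available, which softens the interval trade-off described above.

    import org.apache.kafka.clients.consumer.*;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    public class PullLoop {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1024);  // wait until ~1 KB is ready...
            props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500); // ...or at most 500 ms
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 200);  // cap the batch per poll
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("orders"));
                while (true) {
                    // The consumer pulls at its own pace; a slow consumer simply takes less per poll.
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    records.forEach(r -> System.out.printf("%s-%d@%d %s%n",
                            r.topic(), r.partition(), r.offset(), r.value()));
                }
            }
        }
    }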

Situations where Kafka loses data

• The producer is configured with acks=0 or acks=1.
• Kafka first writes data to the OS page cache and flushes it to disk periodically, so not every message is on disk yet; on a power outage or machine failure, data that only lives in the page cache is lost.
• Writes fail because the network is saturated or the disks are busy, and no resend/retry is configured.
• A consumer crashes after the offset has been committed but before the data was actually processed.
The sketch below shows producer and consumer settings that mitigate these cases.
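A minimal sketch of such settings; the exact values are illustrative, not prescriptive.

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import java.util.Properties;

    public class NoLossConfig {
        static Properties producerProps() {
            Properties p = new Properties();
            p.put(ProducerConfig.ACKS_CONFIG, "all");                 // wait for all in-sync replicas
            p.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);  // retry transient send failures
            p.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);    // retries do not create duplicates
            return p;
        }

        static Properties consumerProps() {
            Properties c = new Properties();
            // Disable auto-commit; call consumer.commitSync() only after records are processed.
            c.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
            return c;
        }
    }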

Kafka data backlog and data skew

If the producer side generates load faster than the cluster can absorb, how do we solve it?
(1) If the backlog comes from insufficient Kafka consumption capacity, consider increasing the number of partitions of the topic and, at the same time, increasing the number of consumers in the consumer group, so that consumer count = partition count (both steps are required; neither alone helps).
(2) If downstream processing cannot keep up, raise the amount pulled per batch. If the batch is too small (data pulled per unit of processing time < production rate), the consumer processes less than is produced and a backlog also builds up. (A tuning sketch follows.)
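A minimal tuning sketch for the "downstream too slow" case; the numbers are illustrative only.

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import java.util.Properties;

    public class BacklogTuning {
        static Properties tune(Properties props) {
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 2000);          // records per poll (default 500)
            props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 4 * 1024 * 1024); // bytes per partition per fetch
            props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600_000);   // allow longer processing per batch
            return props;
        }
    }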

Data skew
Consider increasing the number of partitions of the topic (and, where possible, choosing a partition key that spreads load more evenly), as sketched below.
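A minimal sketch of adding partitions to an existing topic with the AdminClient; the topic name and target count are assumptions. Note that new partitions only help once producer keys spread across them and enough consumers exist to cover them.

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewPartitions;
    import java.util.Map;
    import java.util.Properties;

    public class ExpandPartitions {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // Grow "orders" to 8 partitions in total (existing data is not re-shuffled).
                admin.createPartitions(Map.of("orders", NewPartitions.totalCount(8))).all().get();
            }
        }
    }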

Kafka's advantages over other MQs

Compared with other message queues, Kafka has the following main strengths and weaknesses:
Advantages
1. Scalability: a Kafka cluster can be expanded transparently by adding new servers to the cluster.
2. High performance: Kafka far outperforms traditional brokers such as ActiveMQ and RabbitMQ, in part because it supports batch operations (see the sketch after this list).
3. Fault tolerance: each partition's data is replicated to several servers; when a broker fails, a new partition leader is elected and producers and consumers switch over to the surviving brokers.
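A minimal sketch of the batching referred to in point 2; the values are illustrative, not recommended defaults.

    import org.apache.kafka.clients.producer.ProducerConfig;
    import java.util.Properties;

    public class BatchingConfig {
        static Properties props() {
            Properties p = new Properties();
            p.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);   // up to 64 KB per partition batch
            p.put(ProducerConfig.LINGER_MS_CONFIG, 10);           // wait up to 10 ms to fill a batch
            p.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // compress whole batches
            return p;
        }
    }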

Disadvantages
1. Duplicate messages: Kafka guarantees at-least-once delivery, so although the probability is small, a message may be delivered more than once.
2. Out-of-order messages: messages inside any single partition are guaranteed to be ordered, but if a topic has multiple partitions, delivery order across partitions is not guaranteed.
3. Complexity: Kafka (in classic deployments) depends on ZooKeeper, topics usually need to be created manually, and deployment and maintenance cost more than for a typical MQ.

RabbitMQ
  An AMQP implementation and a traditional messaging-queue system, written in Erlang. It is used in scenarios with strict requirements on data consistency, stability, and reliability, where raw performance and throughput are secondary. Supported protocols also include XMPP, SMTP, and STOMP. It is a relatively heavyweight MQ, better suited to enterprise development. It uses a broker architecture: messages are queued on a central queue before being delivered to clients, with good support for routing, load balancing, and persistence.

ZeroMQ
  Essentially a library of network-programming patterns that turns common request styles into reusable components. ZeroMQ can build the advanced, complex queues that RabbitMQ is less suited for, but developers have to assemble several frameworks themselves, and the technical complexity is a real challenge. It only provides non-persistent queues; if a node goes down, the data is lost.

Redis
  A key-value NoSQL database that also offers MQ functionality, so it can serve as a perfectly usable lightweight queue. Redis enqueues noticeably slower when payloads are large, while dequeueing performs well regardless of data size (see the sketch below).
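A minimal sketch of Redis as a lightweight queue, assuming the Jedis client library and a local Redis instance: LPUSH enqueues and BRPOP blocks until a message is available. There is no built-in acknowledgement or replay, which is the trade-off versus Kafka.

    import redis.clients.jedis.Jedis;
    import java.util.List;

    public class RedisQueue {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                jedis.lpush("task-queue", "job-1");                 // producer side
                List<String> popped = jedis.brpop(5, "task-queue"); // consumer side: returns [key, value]
                System.out.println(popped.get(1));
            }
        }
    }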

What message middleware is for:
1. Decoupling.
2. Redundant storage: data processing sometimes fails and data would otherwise be lost; the middleware can persist messages until they are safely handled.
3. Scalability: because the middleware decouples the processing pipeline, it is easy to scale message enqueueing and processing independently.
4. Peak shaving: when traffic spikes, the application does not crash under a sudden flood of requests.
5. Asynchronous communication: the middleware provides an asynchronous processing mechanism, which improves performance.
