On the Kafka consumer parameter exclude.internal.topics

This post takes a close look at an easily misunderstood Kafka parameter: exclude.internal.topics (the excludeInternalTopics field in the client). It controls whether a consumer that subscribes via a regular expression can receive records from internal topics. Through source-code analysis, the post clears up the confusion that the official documentation's wording can cause.

I had misunderstood this parameter for a long time; now that I have finally sorted it out, here is a note for the record. (Based on Kafka 1.0.)

First, the explanation in the official documentation: "Whether records from internal topics (such as offsets) should be exposed to the consumer. If set to true the only way to receive records from an internal topic is subscribing to it."

Roughly: the parameter controls whether records from internal topics (such as the offsets topic) are exposed to the consumer; if set to true, the only way to receive records from an internal topic is to subscribe to it explicitly.

It was precisely this explanation that misled me. The parameter's name literally reads "exclude internal topics", so I assumed that setting it to true excluded internal topics and setting it to false exposed them, and that was the whole story. But actual tests did not behave the way I expected, so I went to the source code:

// ConsumerCoordinator.java (Kafka 1.0)
public void updatePatternSubscription(Cluster cluster) {
    final Set<String> topicsToSubscribe = new HashSet<>();
    // The key logic is the condition below
    for (String topic : cluster.topics())
        if (subscriptions.subscribedPattern().matcher(topic).matches() &&
                !(excludeInternalTopics && cluster.internalTopics().contains(topic)))
            topicsToSubscribe.add(topic);

    subscriptions.subscribeFromPattern(topicsToSubscribe);

    // note we still need to update the topics contained in the metadata. Although we have
    // specified that all topics should be fetched, only those set explicitly will be retained
    metadata.setTopics(subscriptions.groupSubscription());
}

The matcher call shows that topics can also be subscribed via a regular expression (news to me; I had never used it that way). Combined with the second condition, !(excludeInternalTopics && cluster.internalTopics().contains(topic)), the intent of the parameter becomes clear.

__consumer_offsets is a default internal topic on the Kafka server. Suppose a user has a topic named sumer_off and subscribes with a pattern, like this:

Pattern pattern = Pattern.compile(".*umer_of.*");
consumer.subscribe(pattern);
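For completeness, a fuller configuration sketch around that snippet (the broker address, group id, and deserializers here are placeholders of my own, not from any real deployment). Note that older clients only offer the subscribe(Pattern, ConsumerRebalanceListener) overload:

```java
import java.util.Properties;
import java.util.regex.Pattern;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PatternSubscribeDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "pattern-demo");            // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // false: allow pattern subscription to match internal topics such as __consumer_offsets
        props.put(ConsumerConfig.EXCLUDE_INTERNAL_TOPICS_CONFIG, "false");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Pattern.compile(".*umer_of.*"));
    }
}
```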

That pattern matches the internal topic __consumer_offsets as well, which is exactly what this parameter is there to limit. When it is set to true, even if the pattern matches an internal topic, no records from it are consumed; the only way to consume an internal topic is to subscribe to it explicitly by name. When it is set to false, a pattern that matches an internal topic will consume that topic's records too.
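The effect of the two settings can be simulated without a broker. Here is a minimal sketch that re-applies the same condition from updatePatternSubscription to a hand-made topic list (the topic names other than __consumer_offsets are illustrative):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import java.util.regex.Pattern;

public class InternalTopicFilter {

    // Same condition as in updatePatternSubscription: keep a topic only if the
    // pattern matches AND it is not filtered out as an internal topic.
    static Set<String> topicsToSubscribe(Collection<String> clusterTopics,
                                         Set<String> internalTopics,
                                         Pattern pattern,
                                         boolean excludeInternalTopics) {
        Set<String> result = new TreeSet<>(); // sorted, for stable output
        for (String topic : clusterTopics)
            if (pattern.matcher(topic).matches()
                    && !(excludeInternalTopics && internalTopics.contains(topic)))
                result.add(topic);
        return result;
    }

    public static void main(String[] args) {
        List<String> cluster = Arrays.asList("sumer_off", "__consumer_offsets", "orders");
        Set<String> internal = new TreeSet<>(Arrays.asList("__consumer_offsets"));
        Pattern pattern = Pattern.compile(".*umer_of.*");

        // exclude.internal.topics = true: the internal topic is filtered out
        System.out.println(topicsToSubscribe(cluster, internal, pattern, true));   // [sumer_off]
        // exclude.internal.topics = false: the internal topic is matched too
        System.out.println(topicsToSubscribe(cluster, internal, pattern, false));  // [__consumer_offsets, sumer_off]
    }
}
```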

In my view, the official documentation should mention the connection between this parameter and pattern-based subscription; from the description alone, the parameter is hard to understand.

retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.2 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer 2025-10-14 19:53:52.778 INFO 21876 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.8.0 2025-10-14 19:53:52.778 INFO 21876 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: ebb1d6e21cc92130 2025-10-14 19:53:52.778 INFO 21876 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1760442832778 2025-10-14 19:53:52.800 INFO 21876 --- [ad | producer-1] org.apache.kafka.clients.Metadata : [Producer clientId=producer-1] Cluster ID: Cp8MopI8TC6QJ8pHpIkP9A 2025-10-14 19:53:52.841 INFO 21876 
--- [_consumer_group] c.t.s.e.p.k.c.GenericKafkaConsumerTask : 实际暂停成功,topic: delay_topic_level_3 2025-10-14 19:53:52.841 INFO 21876 --- [nPool-worker-18] c.t.s.e.port.kafka.KafkaEventCenter : 暂停成功,topic: delay_topic_level_3 2025-10-14 19:53:52.841 INFO 21876 --- [nPool-worker-18] c.t.s.e.port.kafka.delay.DelayHandler : 暂停目标topic:delay_topic_level_3 2025-10-14 19:53:55.749 INFO 21876 --- [pool-4-thread-1] c.t.s.e.port.kafka.delay.DelayHandler : 延迟任务执行:targetTopic=delay_topic_level_3, currentTime=1760442835749 2025-10-14 19:53:55.873 INFO 21876 --- [_consumer_group] c.t.s.e.p.k.c.GenericKafkaConsumerTask : 实际恢复成功,topic: delay_topic_level_3 2025-10-14 19:53:55.873 INFO 21876 --- [pool-4-thread-1] c.t.s.e.port.kafka.KafkaEventCenter : 恢复成功,topic: delay_topic_level_3 2025-10-14 19:53:55.874 INFO 21876 --- [pool-4-thread-1] c.t.s.e.port.kafka.delay.DelayHandler : 恢复目标topic消费:delay_topic_level_3 2025-10-14 19:53:55.874 INFO 21876 --- [pool-4-thread-1] c.t.s.e.port.kafka.KafkaEventCenter : 准备转发延迟消息: 2025-10-14 19:53:55.885 INFO 21876 --- [pool-4-thread-1] c.t.s.e.port.kafka.KafkaEventCenter : 延迟事件已转发,targetTopic: vms_dlq_hello-topic 2025-10-14 19:53:55.890 INFO 21876 --- [nPool-worker-18] c.t.n.demo.basicspringboot.DelayConsume : [消费时间:2025-10-14T19:53:55.889] 触发消费,消息内容:延迟触发消息;请分析该日志,
-1 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.2 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer 2025-09-23 16:05:59.820 INFO 23424 --- [ main] o.a.k.clients.producer.ProducerConfig : ProducerConfig values: acks = -1 batch.size = 4096 bootstrap.servers = [192.168.203.128:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = producer-1 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] internal.auto.downgrade.txn.commit = false key.serializer = class 
org.apache.kafka.common.serialization.StringSerializer linger.ms = 1 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.2 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer 
2025-09-23 16:05:59.842 INFO 23424 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.8.0 2025-09-23 16:05:59.842 INFO 23424 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: ebb1d6e21cc92130 2025-09-23 16:05:59.842 INFO 23424 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1758614759841 2025-09-23 16:05:59.852 INFO 23424 --- [opic_demo-group] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.8.0 2025-09-23 16:05:59.853 INFO 23424 --- [opic_demo-group] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: ebb1d6e21cc92130 2025-09-23 16:05:59.853 INFO 23424 --- [opic_demo-group] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1758614759852 2025-09-23 16:05:59.853 INFO 23424 --- [opic_demo-group] c.t.s.e.p.k.consumer.KafkaConsumerTask : start to consumer kafka topic: hello-topic 2025-09-23 16:05:59.853 INFO 23424 --- [opic_demo-group] c.t.s.e.p.k.c.AbstractTaskService : KafkaConsumerTask is running! topic:hello-topic 2025-09-23 16:05:59.853 INFO 23424 --- [_dlq-demo-group] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.8.0 2025-09-23 16:05:59.853 INFO 23424 --- [_dlq-demo-group] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: ebb1d6e21cc92130 2025-09-23 16:05:59.853 INFO 23424 --- [_dlq-demo-group] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1758614759852 2025-09-23 16:05:59.853 INFO 23424 --- [_dlq-demo-group] c.t.s.e.p.k.consumer.KafkaConsumerTask : start to consumer kafka topic: vms_dlq_hello-topic 2025-09-23 16:05:59.854 INFO 23424 --- [_dlq-demo-group] c.t.s.e.p.k.c.AbstractTaskService : KafkaConsumerTask is running! 
topic:vms_dlq_hello-topic 2025-09-23 16:05:59.854 INFO 23424 --- [_dlq-demo-group] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer_10.13.35.30_8d88befa-e360-4c3e-8eb0-7374669ca6ec, groupId=dlq-demo-group] Subscribed to topic(s): vms_dlq_hello-topic 2025-09-23 16:05:59.854 INFO 23424 --- [opic_demo-group] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer_10.13.35.30_2d862b76-7377-4389-affe-872a185bb417, groupId=demo-group] Subscribed to topic(s): hello-topic 2025-09-23 16:06:00.010 INFO 23424 --- [opic_demo-group] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer_10.13.35.30_2d862b76-7377-4389-affe-872a185bb417, groupId=demo-group] Cluster ID: YjO1tbcTTlK2KxmL_Vu2yw 2025-09-23 16:06:00.010 INFO 23424 --- [_dlq-demo-group] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer_10.13.35.30_8d88befa-e360-4c3e-8eb0-7374669ca6ec, groupId=dlq-demo-group] Cluster ID: YjO1tbcTTlK2KxmL_Vu2yw 2025-09-23 16:06:00.010 INFO 23424 --- [ad | producer-1] org.apache.kafka.clients.Metadata : [Producer clientId=producer-1] Cluster ID: YjO1tbcTTlK2KxmL_Vu2yw 2025-09-23 16:06:00.011 INFO 23424 --- [_dlq-demo-group] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer_10.13.35.30_8d88befa-e360-4c3e-8eb0-7374669ca6ec, groupId=dlq-demo-group] Discovered group coordinator admin1-virtual-machine:9092 (id: 2147483647 rack: null) 2025-09-23 16:06:00.011 INFO 23424 --- [opic_demo-group] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer_10.13.35.30_2d862b76-7377-4389-affe-872a185bb417, groupId=demo-group] Discovered group coordinator admin1-virtual-machine:9092 (id: 2147483647 rack: null) 2025-09-23 16:06:00.435 INFO 23424 --- [opic_demo-group] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer_10.13.35.30_2d862b76-7377-4389-affe-872a185bb417, groupId=demo-group] (Re-)joining group 2025-09-23 16:06:00.435 INFO 23424 --- [_dlq-demo-group] o.a.k.c.c.internals.AbstractCoordinator : 
[Consumer clientId=consumer_10.13.35.30_8d88befa-e360-4c3e-8eb0-7374669ca6ec, groupId=dlq-demo-group] (Re-)joining group 2025-09-23 16:06:00.461 INFO 23424 --- [_dlq-demo-group] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer_10.13.35.30_8d88befa-e360-4c3e-8eb0-7374669ca6ec, groupId=dlq-demo-group] (Re-)joining group 2025-09-23 16:06:00.461 INFO 23424 --- [opic_demo-group] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer_10.13.35.30_2d862b76-7377-4389-affe-872a185bb417, groupId=demo-group] (Re-)joining group 2025-09-23 16:06:00.463 INFO 23424 --- [opic_demo-group] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer_10.13.35.30_2d862b76-7377-4389-affe-872a185bb417, groupId=demo-group] Successfully joined group with generation Generation{generationId=25, memberId='consumer_10.13.35.30_2d862b76-7377-4389-affe-872a185bb417-a34c017a-69ac-4f10-9be9-fd96e739d425', protocol='cooperative-sticky'} 2025-09-23 16:06:00.463 INFO 23424 --- [_dlq-demo-group] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer_10.13.35.30_8d88befa-e360-4c3e-8eb0-7374669ca6ec, groupId=dlq-demo-group] Successfully joined group with generation Generation{generationId=9, memberId='consumer_10.13.35.30_8d88befa-e360-4c3e-8eb0-7374669ca6ec-e9fcfebe-c488-45bd-a1bc-9d76f575a60d', protocol='cooperative-sticky'} 2025-09-23 16:06:00.465 INFO 23424 --- [_dlq-demo-group] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer_10.13.35.30_8d88befa-e360-4c3e-8eb0-7374669ca6ec, groupId=dlq-demo-group] Finished assignment for group at generation 9: {consumer_10.13.35.30_8d88befa-e360-4c3e-8eb0-7374669ca6ec-e9fcfebe-c488-45bd-a1bc-9d76f575a60d=Assignment(partitions=[vms_dlq_hello-topic-0])} 2025-09-23 16:06:00.465 INFO 23424 --- [opic_demo-group] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer_10.13.35.30_2d862b76-7377-4389-affe-872a185bb417, groupId=demo-group] Finished assignment for group at 
generation 25: {consumer_10.13.35.30_2d862b76-7377-4389-affe-872a185bb417-a34c017a-69ac-4f10-9be9-fd96e739d425=Assignment(partitions=[hello-topic-0])} 2025-09-23 16:06:00.468 INFO 23424 --- [opic_demo-group] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer_10.13.35.30_2d862b76-7377-4389-affe-872a185bb417, groupId=demo-group] Successfully synced group in generation Generation{generationId=25, memberId='consumer_10.13.35.30_2d862b76-7377-4389-affe-872a185bb417-a34c017a-69ac-4f10-9be9-fd96e739d425', protocol='cooperative-sticky'} 2025-09-23 16:06:00.468 INFO 23424 --- [_dlq-demo-group] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer_10.13.35.30_8d88befa-e360-4c3e-8eb0-7374669ca6ec, groupId=dlq-demo-group] Successfully synced group in generation Generation{generationId=9, memberId='consumer_10.13.35.30_8d88befa-e360-4c3e-8eb0-7374669ca6ec-e9fcfebe-c488-45bd-a1bc-9d76f575a60d', protocol='cooperative-sticky'} 2025-09-23 16:06:00.468 INFO 23424 --- [opic_demo-group] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer_10.13.35.30_2d862b76-7377-4389-affe-872a185bb417, groupId=demo-group] Updating assignment with Assigned partitions: [hello-topic-0] Current owned partitions: [] Added partitions (assigned - owned): [hello-topic-0] Revoked partitions (owned - assigned): [] 2025-09-23 16:06:00.468 INFO 23424 --- [_dlq-demo-group] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer_10.13.35.30_8d88befa-e360-4c3e-8eb0-7374669ca6ec, groupId=dlq-demo-group] Updating assignment with Assigned partitions: [vms_dlq_hello-topic-0] Current owned partitions: [] Added partitions (assigned - owned): [vms_dlq_hello-topic-0] Revoked partitions (owned - assigned): [] 2025-09-23 16:06:00.468 INFO 23424 --- [opic_demo-group] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer_10.13.35.30_2d862b76-7377-4389-affe-872a185bb417, groupId=demo-group] Notifying assignor about the new 
Assignment(partitions=[hello-topic-0]) 2025-09-23 16:06:00.468 INFO 23424 --- [_dlq-demo-group] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer_10.13.35.30_8d88befa-e360-4c3e-8eb0-7374669ca6ec, groupId=dlq-demo-group] Notifying assignor about the new Assignment(partitions=[vms_dlq_hello-topic-0]) 2025-09-23 16:06:00.469 INFO 23424 --- [opic_demo-group] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer_10.13.35.30_2d862b76-7377-4389-affe-872a185bb417, groupId=demo-group] Adding newly assigned partitions: hello-topic-0 2025-09-23 16:06:00.469 INFO 23424 --- [_dlq-demo-group] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer_10.13.35.30_8d88befa-e360-4c3e-8eb0-7374669ca6ec, groupId=dlq-demo-group] Adding newly assigned partitions: vms_dlq_hello-topic-0 2025-09-23 16:06:00.469 INFO 23424 --- [opic_demo-group] com.tplink.smb.eventcenter.api.Handler : ending rebalance! 2025-09-23 16:06:00.469 INFO 23424 --- [_dlq-demo-group] com.tplink.smb.eventcenter.api.Handler : ending rebalance! 2025-09-23 16:06:00.475 INFO 23424 --- [_dlq-demo-group] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer_10.13.35.30_8d88befa-e360-4c3e-8eb0-7374669ca6ec, groupId=dlq-demo-group] Setting offset for partition vms_dlq_hello-topic-0 to the committed offset FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[admin1-virtual-machine:9092 (id: 0 rack: null)], epoch=absent}} 2025-09-23 16:06:00.475 INFO 23424 --- [opic_demo-group] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer_10.13.35.30_2d862b76-7377-4389-affe-872a185bb417, groupId=demo-group] Setting offset for partition hello-topic-0 to the committed offset FetchPosition{offset=12, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[admin1-virtual-machine:9092 (id: 0 rack: null)], epoch=absent}} 尝试处理消息: Hello Kafka! 
这是一条会触发死信的消息 2025-09-23 16:06:00.512 WARN 23424 --- [nPool-worker-22] c.t.s.e.p.k.d.DLQEventHandlerWrapper : Event handle failed (attempt 1/2): 模拟业务处理失败 2025-09-23 16:06:00.512 INFO 23424 --- [nPool-worker-22] c.t.s.e.p.k.d.DLQEventHandlerWrapper : Scheduled delay of 1000ms for event 尝试处理消息: Hello Kafka! 这是一条会触发死信的消息 2025-09-23 16:06:01.519 WARN 23424 --- [nPool-worker-22] c.t.s.e.p.k.d.DLQEventHandlerWrapper : Event handle failed (attempt 2/2): 模拟业务处理失败 2025-09-23 16:06:01.519 INFO 23424 --- [nPool-worker-22] c.t.s.e.port.kafka.KafkaEventCenter : not implement yet 2025-09-23 16:06:01.519 INFO 23424 --- [nPool-worker-22] c.t.s.e.p.k.d.DLQEventHandlerWrapper : Event sent to DLQ topic: vms_dlq_hello-topic 2025-09-23 16:06:01.519 ERROR 23424 --- [nPool-worker-22] c.t.s.e.p.k.d.DLQEventHandlerWrapper : Event failed after 2 retries, sent to DLQ请分析以完整日志