A first look at the flush() method of the KafkaTemplate class in Spring Boot for Kafka

Spring Boot and Kafka: a look at KafkaTemplate's flush() method
This article examines the flush() method of KafkaTemplate in Spring Boot 2.1.3.RELEASE. The method makes all buffered records immediately available to send and blocks until the requests associated with those records complete. In a transactional context, flush() should not be called directly: the transaction commit already flushes all buffered records, and flushing by hand may interfere with the transaction. As a practical check, the author suggests manually throwing an exception inside the transaction and calling flush() in a finally block to observe the effect.

Preface

  Version: this article uses Spring Boot 2.1.3.RELEASE, and the spring-kafka it pulls in is the matching version.

  Motivation: while learning how Spring Boot works with Kafka, I came across the flush() function and wanted to find out what it actually does, hence this article.

Wind Release! Function-Tracing Jutsu

Create a new Spring Boot project and add the spring-kafka dependency. Searching the KafkaTemplate API, you can find the flush() method:

    public void flush() {
        // obtain the underlying producer (the transaction's producer if one is active)
        Producer producer = this.getTheProducer();

        try {
            producer.flush();
        } finally {
            // the producer is only closed/released here when no transaction is in progress
            this.closeProducer(producer, this.inTransaction());
        }

    }
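
Just to make the call site concrete, here is a minimal sketch of how application code might use it. The service class DemoSender and the topic demo-topic are made up for illustration, and it assumes Spring Boot has auto-configured a KafkaTemplate<String, String> bean:

    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Service;

    // Hypothetical service, only meant to show where flush() would be called from.
    @Service
    public class DemoSender {

        private final KafkaTemplate<String, String> kafkaTemplate;

        public DemoSender(KafkaTemplate<String, String> kafkaTemplate) {
            this.kafkaTemplate = kafkaTemplate;
        }

        public void sendBatch() {
            for (int i = 0; i < 10; i++) {
                // send() is asynchronous; records may sit in the producer's buffer for a while
                kafkaTemplate.send("demo-topic", "message-" + i);
            }
            // push all buffered records out now and block until their requests complete
            kafkaTemplate.flush();
        }
    }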

Clicking into producer.flush(), we find that Producer is just an interface, so let's look for its implementation class.

    
    public interface Producer<K, V> extends Closeable {
        // ...

        /** The Producer interface; see {@link KafkaProducer#flush()} */
        void flush();
        // ...
    }
    /** The flush() method of KafkaProducer, the implementation class of the Producer interface:
     * Invoking this method makes all buffered records immediately available to send (even if <code>linger.ms</code> is
     * greater than 0) and blocks on the completion of the requests associated with these records. The post-condition
     * of <code>flush()</code> is that any previously sent record will have completed (e.g. <code>Future.isDone() == true</code>).
     * A request is considered completed when it is successfully acknowledged
     * according to the <code>acks</code> configuration you have specified or else it results in an error.
     * <p>
     * Other threads can continue sending records while one thread is blocked waiting for a flush call to complete,
     * however no guarantee is made about the completion of records sent after the flush call begins.
     * <p>
     * This method can be useful when consuming from some input system and producing into Kafka. The <code>flush()</code> call
     * gives a convenient way to ensure all previously sent messages have actually completed.
     * <p>
     * This example shows how to consume from one Kafka topic and produce to another Kafka topic:
     * <pre>
     * {@code
     * for(ConsumerRecord<String, String> record: consumer.poll(100))
     *     producer.send(new ProducerRecord("my-topic", record.key(), record.value()));
     * producer.flush();
     * consumer.commit();
     * }
     * </pre>
     *
     * Note that the above example may drop records if the produce request fails. If we want to ensure that this does not occur
     * we need to set <code>retries=&lt;large_number&gt;</code> in our config.
     * </p>
     * <p>
     * Applications don't need to call this method for transactional producers, since the {@link #commitTransaction()} will
     * flush all buffered records before performing the commit. This ensures that all the {@link #send(ProducerRecord)}
     * calls made since the previous {@link #beginTransaction()} are completed before the commit.
     * </p>
     *
     * @throws InterruptException If the thread is interrupted while blocked
     */
    @Override
    public void flush() {
        log.trace("Flushing accumulated records in producer.");
        this.accumulator.beginFlush();
        this.sender.wakeup();
        try {
            this.accumulator.awaitFlushCompletion();
        } catch (InterruptedException e) {
            throw new InterruptException("Flush interrupted.", e);
        }
    }

The gist of the English doc: invoking this method makes all buffered records immediately available to send, even if linger.ms is greater than 0, and it blocks until the requests associated with those records have completed. The post-condition of flush() is that any previously sent record will have completed, i.e. it was either acknowledged according to the acks setting or it ended in an error. (Please excuse the rough translation; the parts I skipped say much the same thing.)
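
To see what that buffering means in practice, here is a rough sketch using the plain KafkaProducer. The broker address localhost:9092 and the topic demo-topic are assumptions, and linger.ms is set unusually high on purpose so that records really do sit in the buffer until flush() is called:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class FlushDemo {

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            // linger.ms > 0: the producer deliberately holds records back to batch them
            props.put(ProducerConfig.LINGER_MS_CONFIG, "5000");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("demo-topic", "key", "value"));
                // without flush() this record could wait up to linger.ms in the buffer;
                // flush() makes it available to send immediately and blocks until the
                // request completes (acknowledged per acks, or failed with an error)
                producer.flush();
            }
        }
    }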

The last paragraph also says that an application using transactions does not need to call this method, because commitTransaction() already flushes all buffered records before performing the commit; a flush() of your own just empties the buffer ahead of the commit and may get in the transaction's way. If you don't quite believe it, try throwing an exception by hand while the transaction is running, add a flush() in the finally block, and observe the result; a rough sketch of that experiment follows.
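
Here is one shape that experiment might take. Everything in it is hypothetical: the class name, the topic demo-topic, and the assumption that the producer factory is configured with a transaction id prefix so that executeInTransaction() can be used:

    import org.springframework.kafka.core.KafkaTemplate;

    // Hypothetical harness for the experiment suggested above.
    public class FlushInTransactionExperiment {

        private final KafkaTemplate<String, String> kafkaTemplate;

        public FlushInTransactionExperiment(KafkaTemplate<String, String> kafkaTemplate) {
            this.kafkaTemplate = kafkaTemplate;
        }

        public void run() {
            try {
                kafkaTemplate.executeInTransaction(operations -> {
                    operations.send("demo-topic", "inside-transaction");
                    // manually raise an exception so the transaction is expected to abort
                    throw new RuntimeException("forced failure");
                });
            } catch (RuntimeException ex) {
                // if the failure propagates, the send above should have been rolled back
                // and stays invisible to consumers using isolation.level=read_committed
            } finally {
                // the step proposed in the text: call flush() here and observe what happens
                kafkaTemplate.flush();
            }
        }
    }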

As for the code inside the function itself, I'll dig into it when I have more time. Comments and corrections from more experienced readers are very welcome, thanks!

 
