Spring Boot Kafka

Source code: https://github.com/huiyiwu/spring-boot-message/spring-boot-kafka

Kafka is a distributed, partitioned, multi-replica, multi-subscriber distributed log system coordinated by ZooKeeper (it can also serve as an MQ system). It is commonly used for web/nginx logs, access logs, messaging services, and so on.

1. Installation

  1. Do not nest the ZooKeeper and Kafka directories too deeply; two to three levels is best
  2. To use Kafka, both the ZooKeeper service and the Kafka service must be running

1.1. Download

Download links (choose your own version):

Download the bin release; the second package reports a ZooKeeper-related error at startup

1.2. ZooKeeper configuration

  1. After extracting, rename zoo_sample.cfg in the conf directory to zoo.cfg (or copy it and rename the copy), then set dataDir=[your install directory]\\data in the file to specify where ZooKeeper stores its data.
  2. Add a ZOOKEEPER_HOME system environment variable pointing to the ZooKeeper root directory.
  3. Append %ZOOKEEPER_HOME%\bin to the Path system environment variable. On Win10 add it directly; on Win7 surround it with ;.
  4. Run zkserver in a command window to verify the configuration.
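As a concrete example, a minimal zoo.cfg for step 1 might look like this (the path and port values are illustrative, not from the post):

```properties
# zoo.cfg (illustrative values)
tickTime=2000
dataDir=D:\\zookeeper\\data
clientPort=2181
```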

1.3. Kafka configuration

  1. After extracting, edit server.properties in the config directory and set log.dirs to a log directory; the default value causes an exception at runtime.
  2. Edit zookeeper.properties in the config directory and set dataDir.
  3. In bin/windows, run zookeeper-server-start.bat ../../config/zookeeper.properties to start ZooKeeper.
  4. In a new command window in bin/windows, run kafka-server-start.bat ../../config/server.properties to start Kafka.
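Once both services are up, you can optionally sanity-check the broker from additional command windows in bin/windows using the console tools that ship with Kafka (the topic name here is illustrative; --bootstrap-server on kafka-topics requires Kafka 2.2+):

```
:: create a test topic
kafka-topics.bat --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test

:: produce messages interactively
kafka-console-producer.bat --broker-list localhost:9092 --topic test

:: consume them from the beginning
kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning
```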

2. Add the dependency

    <dependency>
      <groupId>org.springframework.kafka</groupId>
      <artifactId>spring-kafka</artifactId>
    </dependency>

3. application.yml

spring:
  kafka:
    bootstrap-servers: 127.0.0.1:9092 # Kafka server address; for a cluster, list multiple addresses separated by commas
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      group-id: default_consumer_group # consumer group ID
      enable-auto-commit: false
      auto-commit-interval: 1000
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    listener:
      # takes effect when enable-auto-commit is false
      ack-mode: manual
      ack-count: 10 # number of records between offset commits when ackMode is COUNT or COUNT_TIME
      ack-time: 10000 # time in milliseconds between offset commits when ackMode is TIME or COUNT_TIME

4. Message producer

MsgProducer.java

/**
 * Author: Huchx
 * Date: 2021/1/20 16:18
 */
@Component
public class MsgProducer {

    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;

    public void send(String msg) {
        ListenableFuture<SendResult<String, Object>> future = kafkaTemplate.send(AppConstansts.TOPIC, msg);
        future.addCallback(new ListenableFutureCallback<SendResult<String, Object>>() {
            @Override
            public void onFailure(Throwable throwable) {
                System.out.println("Failed to send message");
            }

            @Override
            public void onSuccess(SendResult<String, Object> result) {
                System.out.println("Message sent successfully");
            }
        });
    }
}
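The examples throughout this post reference an AppConstansts class that is never shown. A minimal sketch might look like the following; all topic and group names are illustrative assumptions, not values from the original repository:

```java
// Hypothetical constants class assumed by the examples in this post.
// The post never shows it, so every value below is an assumption.
public class AppConstansts {
    public static final String TOPIC = "topic_1";               // topic used by MsgProducer and MsgConsumer
    public static final String TOPIC_1 = "topic_1";             // input topic of the Kafka Streams example
    public static final String TOPIC_2 = "topic_2";             // output topic of the Kafka Streams example
    public static final String TOPIC_GROUP_1 = "topic_group_1"; // group ID of the first listener
    public static final String TOPIC_GROUP_2 = "topic_group_2"; // group ID of the second listener
}
```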

5. Message consumer

Because the two listeners below subscribe to the same topic with different group IDs, each message is delivered to both of them.

MsgConsumer.java

/**
 * Author: Huchx
 * Date: 2021/1/20 16:28
 */
@Component
public class MsgConsumer {

    @KafkaListener(topics = AppConstansts.TOPIC, groupId = AppConstansts.TOPIC_GROUP_1)
    public void topic_1(ConsumerRecord<?, ?> record, Acknowledgment ack, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        Optional<?> obj = Optional.ofNullable(record.value());
        if (obj.isPresent()) {
            Object msg = obj.get();
            System.out.println("Topic_1 consumed Topic:" + topic + ",Message:" + msg);
            ack.acknowledge();
        }
    }

    @KafkaListener(topics = AppConstansts.TOPIC, groupId = AppConstansts.TOPIC_GROUP_2)
    public void topic_2(ConsumerRecord<?, ?> record, Acknowledgment ack, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        Optional<?> obj = Optional.ofNullable(record.value());
        if (obj.isPresent()) {
            Object msg = obj.get();
            System.out.println("Topic_2 consumed Topic:" + topic + ",Message:" + msg);
            ack.acknowledge();
        }
    }
}

6. Sending a message

KafkaController.java


/**
 * Author: Huchx
 * Date: 2021/1/20 16:34
 */
@RestController
public class KafkaController {
    @Autowired
    MsgProducer msgProducer;

    @RequestMapping("/send")
    public String send(){
        msgProducer.send("This message was sent by the controller");
        return "send success";
    }

}

7. Test

Start the service and open localhost:8080/send in a browser; the result is shown below:
(screenshot: Zookeeper Send)

8. Kafka Streams

8.1. Add the dependency

    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-streams</artifactId>
      <version>2.2.1</version>
    </dependency>

If the kafka-clients and kafka-streams versions do not match, exceptions will occur; consider aligning them to the same (or a close) version:

    <dependency>
      <groupId>org.springframework.kafka</groupId>
      <artifactId>spring-kafka</artifactId>
      <exclusions>
        <exclusion>
          <groupId>org.apache.kafka</groupId>
          <artifactId>kafka-clients</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-streams</artifactId>
      <version>2.2.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-clients</artifactId>
      <version>2.2.1</version>
    </dependency>

8.2. Add configuration

  1. Add the @EnableKafkaStreams annotation to the application's main class to enable Kafka Streams
  2. Add spring.kafka.streams.* properties to application.yaml:
     streams:
        application-id: kafka-streams-huchx
        properties:
          default:
            key:
              serde: org.apache.kafka.common.serialization.Serdes$StringSerde
            value:
              serde: org.springframework.kafka.support.serializer.JsonSerde
    
  3. Add the stream configuration
    /**
     * Author: Huchx
     * Date: 2021/1/22 13:37
     */
    @Configuration
    public class KafkaStream {
        @Bean
        public KStream<String, String> kStream(StreamsBuilder streamsBuilder) {
            KStream<String, String> stream = streamsBuilder.stream(AppConstansts.TOPIC_1, Consumed.with(Serdes.String(), Serdes.String()));
            stream.map((key, value) -> {
                value += "--huchx";
                return new KeyValue<>(key, value);
            }).to(AppConstansts.TOPIC_2); // route messages from TOPIC_1 through the stream and on to TOPIC_2
            return stream;
        }
    }

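The map step above only appends a suffix to each record value while keeping the key. Stripped of the Kafka Streams API, the transformation is equivalent to the plain-Java function below (the KeyValue pair is modeled here with Map.Entry; this is an illustration, not the post's code):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

public class StreamMapSketch {
    // Pure-Java equivalent of the KStream map step: keep the key, append "--huchx" to the value.
    static Map.Entry<String, String> transform(String key, String value) {
        return new SimpleEntry<>(key, value + "--huchx");
    }

    public static void main(String[] args) {
        Map.Entry<String, String> out = transform("k1", "hello");
        System.out.println(out.getKey() + " -> " + out.getValue()); // k1 -> hello--huchx
    }
}
```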

8.3. Test

Start the service and open localhost:8080/send2 in a browser; the result is shown below:
(screenshot: Kafka Stream)
