Spring Boot version: 2.3.12.RELEASE
MySQL version: 8.0
Kafka version: kafka_2.13-2.5.0
Example flow: MySQL holds a table student with two columns (id, name). Insert 100 random rows, query them back from MySQL, send the data to Kafka, and then consume it from Kafka.
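For orientation, the student rows can be modeled with a small MyBatis-Plus entity. This is a minimal sketch, assuming the com.sun.pojo package from the type-aliases-package setting below and an auto-increment primary key; it is illustrative, not code taken from this article.

package com.sun.pojo;

import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;

/**
 * Maps the student table (id, name).
 */
@Data
@TableName("student")
public class Student {

    /** Primary key; assumed to be auto-incremented by MySQL. */
    @TableId(type = IdType.AUTO)
    private Long id;

    /** Student name. */
    private String name;
}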
pom dependencies
<!--spring-boot-starter-->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<!--spring-boot-starter-web-->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<!-- spring-kafka -->
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
<!-- lombok -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>1.18.26</version>
</dependency>
<!--fastjson-->
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>2.0.32</version>
</dependency>
<!--HuTool-all-->
<dependency>
<groupId>cn.hutool</groupId>
<artifactId>hutool-all</artifactId>
<version>5.8.26</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>33.2.0-jre</version>
<scope>compile</scope>
</dependency>
<!-- gson -->
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.10.1</version>
</dependency>
<!-- MyBatis-Plus: mind the version -->
<dependency>
<groupId>com.baomidou</groupId>
<artifactId>mybatis-plus-boot-starter</artifactId>
<version>3.5.1</version>
</dependency>
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid-spring-boot-starter</artifactId>
<version>1.2.16</version>
</dependency>
<!-- mysql-connector-j -->
<dependency>
<groupId>com.mysql</groupId>
<artifactId>mysql-connector-j</artifactId>
<version>8.0.33</version>
</dependency>
<!-- DingTalk push dependency -->
<dependency>
<groupId>com.aliyun</groupId>
<artifactId>alibaba-dingtalk-service-sdk</artifactId>
<version>2.0.0</version>
</dependency>
application.yml configuration
server:
port: 8899
spring:
application:
name: kafkaDemo
# Most of the Kafka configuration lives in the configuration classes below; move it into application.yml and inject it into the configuration classes if you prefer.
kafka:
bootstrap-servers: localhost:9092
listener:
type: batch
datasource:
type: com.alibaba.druid.pool.DruidDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
url: jdbc:mysql://localhost:3306/demo?allowMultiQueries=true&serverTimezone=Asia/Shanghai&useSSL=false
username: root
password: 123456
mybatis-plus:
# Location of the Mapper XML files
mapper-locations: classpath:mapper/*Mapper.xml
# Package scanned by MyBatis for type aliases; registered classes can be referenced by their simple names in Mapper XML files instead of fully qualified names.
type-aliases-package: com.sun.pojo
configuration:
# Map underscores to camel case, e.g. A_COLUMN -> aColumn
map-underscore-to-camel-case: true
# Whether a single SQL statement may return multiple result sets
multiple-result-sets-enabled: true
# Print MyBatis-Plus SQL logs to the console
log-impl: org.apache.ibatis.logging.stdout.StdOutImpl
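With MyBatis-Plus the data-access layer for this example can stay very small: a mapper interface inheriting BaseMapper already provides insert and selectList, which cover "insert 100 random rows, then query them all". A minimal sketch, assuming the Student entity above and an illustrative com.sun.mapper package (the real package name is not shown in this article):

package com.sun.mapper;

import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.sun.pojo.Student;
import org.apache.ibatis.annotations.Mapper;

/**
 * MyBatis-Plus mapper for the student table; inherited methods such as
 * insert(entity) and selectList(null) are enough for this example.
 */
@Mapper
public interface StudentMapper extends BaseMapper<Student> {
}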
Configuration classes
Most of the Kafka configuration is kept in the configuration classes below; you can move it into application.yml to suit your needs and inject it into the configuration classes instead.
The thread pool configuration is covered in an earlier article.
Pushing messages to DingTalk to simulate exception notifications is also covered in an earlier article.
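As a sketch of the "write it in application.yml and inject it" alternative mentioned above, properties can be bound to a dedicated class instead of being hard-coded in the config class. The demo.kafka prefix and its fields below are purely illustrative and not part of this project:

import lombok.Data;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;

/**
 * Illustrative binding class: values under the hypothetical demo.kafka prefix
 * in application.yml are injected here and can then be used when building the
 * producer/consumer property maps.
 */
@Data
@Configuration
@ConfigurationProperties(prefix = "demo.kafka")
public class DemoKafkaProperties {

    /** e.g. demo.kafka.acks: all */
    private String acks = "all";

    /** e.g. demo.kafka.retries: 3 */
    private Integer retries = 3;
}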
Kafka constants
/**
* Topic and consumer group constants
*/
public class KafkaConstant {
/**
* Test topic
*/
public static final String KAFKA_TOPIC = "student_topic";
/**
* Consumer ID
*/
public static final String ID = "student_group_id";
/**
* Consumer group ID
*/
public static final String GROUP_ID = "student_group";
}
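These constants are what the producer and consumer sides refer to. A minimal usage sketch, assuming the KafkaTemplate and batch listener container configured in this article (the class below and its method names are illustrative only):

import com.alibaba.fastjson.JSON;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

import java.util.List;

/**
 * Illustrative use of the constants above.
 */
@Component
public class StudentKafkaSketch {

    private final KafkaTemplate<String, Object> kafkaTemplate;

    public StudentKafkaSketch(KafkaTemplate<String, Object> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    /** Producer side: send one student record to the topic as a JSON string. */
    public void send(Object student) {
        kafkaTemplate.send(KafkaConstant.KAFKA_TOPIC, JSON.toJSONString(student));
    }

    /** Consumer side: batch listener (spring.kafka.listener.type=batch) on the same topic. */
    @KafkaListener(id = KafkaConstant.ID, topics = KafkaConstant.KAFKA_TOPIC, groupId = KafkaConstant.GROUP_ID)
    public void onMessages(List<String> messages) {
        messages.forEach(System.out::println);
    }
}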
Kafka producer configuration
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringBootConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.ProducerListener;
import org.springframework.kafka.support.serializer.JsonSerializer;
import java.util.HashMap;
import java.util.Map;
@Slf4j
@SpringBootConfiguration
public class KafkaProducerConfig {
@Value("${spring.kafka.bootstrap-servers}")
private String bootstrapServers;
@Bean
public Map<String, Object> producerConfigs() {
Map<String, Object> props = new HashMap<String, Object>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
/*
acks=0   : the producer does not wait for any acknowledgement from the broker before considering the message written.
acks=1   : the producer receives a success response as soon as the cluster's leader node has received the message.
acks=all : the producer receives a success response only after all replicating nodes have received the message.
           Transactions require acks=all.
*/
props.put