Spring Boot + Kafka Integration
0. Install Kafka
Kafka requires a JDK to run.
Download Kafka: https://mirrors.tuna.tsinghua.edu.cn/apache/kafka/2.2.0/kafka_2.12-2.2.0.tgz
Extract the archive:
tar -xzvf kafka_2.12-2.2.0.tgz
Edit the Kafka configuration file KAFKA_HOME/config/server.properties:
port=9092                             # port number
host.name=127.0.0.1                   # server IP address; change to your own server's IP
log.dirs=/usr/local/kafka/log/kafka   # log storage path (the directory created above)
zookeeper.connect=localhost:2181      # ZooKeeper address and port; for a single-node deployment, localhost:2181
delete.topic.enable=true              # without this, the kafka delete command does not actually delete a topic; it is only shown as (marked for deletion)
Note: the Kafka distribution ships with a bundled ZooKeeper.
Edit the ZooKeeper configuration file KAFKA_HOME/config/zookeeper.properties:
dataDir=/usr/local/kafka/zookeeper         # ZooKeeper data directory
dataLogDir=/usr/local/kafka/log/zookeeper  # ZooKeeper log directory
clientPort=2181
maxClientCnxns=100
tickTime=2000
initLimit=10
syncLimit=5
1. Starting and Stopping
Start ZooKeeper:
KAFKA_HOME/bin/zookeeper-server-start.sh KAFKA_HOME/config/zookeeper.properties
Start Kafka:
KAFKA_HOME/bin/kafka-server-start.sh KAFKA_HOME/config/server.properties
Stop Kafka:
KAFKA_HOME/bin/kafka-server-stop.sh
Stop ZooKeeper:
KAFKA_HOME/bin/zookeeper-server-stop.sh
To simplify startup and shutdown, create a start directory under KAFKA_HOME and add the following scripts:
#zookeeper-start.sh
nohup ../bin/zookeeper-server-start.sh ../config/zookeeper.properties > zookeeper.log 2>&1 &
#zookeeper-stop.sh
../bin/zookeeper-server-stop.sh
#kafka-start.sh
nohup ../bin/kafka-server-start.sh ../config/server.properties > kafka.log 2>&1 &
#kafka-stop.sh
../bin/kafka-server-stop.sh
2. Testing with Shell Commands
Create a topic:
KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
List topics:
KAFKA_HOME/bin/kafka-topics.sh --list --zookeeper localhost:2181
Start a consumer:
KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic test --from-beginning
Start a producer:
KAFKA_HOME/bin/kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic test
Delete the topic:
KAFKA_HOME/bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic test
3. Integrating Kafka with Spring Boot
Kafka version: kafka_2.12-2.2.0 (Scala 2.12, Kafka 2.2.0)
Maven dependency:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.1.5.RELEASE</version>
</dependency>
Spring Boot configuration file application.properties:
# producer
spring.kafka.producer.bootstrap-servers=127.0.0.1:9092
# consumer
spring.kafka.consumer.bootstrap-servers=127.0.0.1:9092
# serialize message objects as JSON
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.producer.properties.spring.json.type.mapping=userLog:top.changelife.kafka.UserLog
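The properties above configure JSON only on the producer side. For the consumer's typed @KafkaHandler method to receive a deserialized UserLog rather than raw bytes, the consumer typically needs a matching JsonDeserializer, type mapping, and trusted package as well. A sketch, assuming the same top.changelife.kafka package:

```properties
# consumer-side JSON deserialization (mirrors the producer mapping above)
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.consumer.properties.spring.json.type.mapping=userLog:top.changelife.kafka.UserLog
# allow the deserializer to instantiate classes from this package
spring.kafka.consumer.properties.spring.json.trusted.packages=top.changelife.kafka
```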
Message entity (setter/getter methods omitted):
public class UserLog {
    private String username;
    private String userid;
    private String state;
}
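The producer code below chains the setter calls (setUsername(...).setUserid(...).setState(...)), which implies the omitted setters return this. A minimal sketch of how those fluent setters could look (the toString implementation is an assumption, added so the log statements print something readable):

```java
// Hypothetical fluent setters for UserLog: each setter returns `this`,
// so calls can be chained as in the producer code below.
class UserLog {
    private String username;
    private String userid;
    private String state;

    public UserLog setUsername(String username) { this.username = username; return this; }
    public UserLog setUserid(String userid) { this.userid = userid; return this; }
    public UserLog setState(String state) { this.state = state; return this; }

    // plain getters, also required by the JSON serializer
    public String getUsername() { return username; }
    public String getUserid() { return userid; }
    public String getState() { return state; }

    @Override
    public String toString() {
        return "UserLog{username=" + username + ", userid=" + userid + ", state=" + state + "}";
    }
}
```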
Producing messages:
import java.util.UUID;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class KafkaProducer {

    static final String topic1 = "topic1";

    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;

    public void send() {
        UserLog userLog = new UserLog();
        String userId = UUID.randomUUID().toString();
        userLog.setUsername("java").setUserid(userId).setState("active");
        kafkaTemplate.send(topic1, userLog);
        System.out.println("sent message: " + userLog);
    }
}
Consuming messages:
import org.springframework.kafka.annotation.KafkaHandler;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
@KafkaListener(id = "group1", topics = {KafkaProducer.topic1})
public class KafkaConsumer {

    // fallback for payload types without a dedicated @KafkaHandler
    @KafkaHandler(isDefault = true)
    public void unknown(Object object) {
        System.out.println("default handler: " + object);
    }

    @KafkaHandler
    public void userLog(UserLog userLog) {
        System.out.println("userLog: " + userLog);
    }
}