Building on the previous post, where we successfully installed and tested ZooKeeper + Kafka, this post integrates Kafka into a Spring Boot microservice.
1: Add the dependency (match the version to your Spring Boot release)

```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
```

2: Configure Kafka in application.yml
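A note on the version: if the project inherits from `spring-boot-starter-parent` (or imports the `spring-boot-dependencies` BOM), the `spring-kafka` version is managed for you, which is why no `<version>` tag appears above. In a standalone pom you would pin a version yourself; a sketch (the version number here is only a placeholder, not a recommendation):

```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <!-- only needed when no Spring Boot BOM manages the version -->
    <version>x.y.z</version>
</dependency>
```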
```yaml
spring:
  kafka:
    # Kafka broker address; separate cluster nodes with commas
    bootstrap-servers: 192.168.1.155:9092
    producer:
      retries: 0
      batch-size: 16384
      buffer-memory: 33554432
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      # default consumer group id
      group-id: appServer
      auto-offset-reset: earliest
      enable-auto-commit: true
      auto-commit-interval: 20000
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
```
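Each `spring.kafka.*` key above maps onto a raw, dotted Kafka client property that Spring Boot hands to the underlying producer. A minimal pure-JDK sketch of that mapping, using only `java.util.Properties` (`KafkaProps` and `producerProps` are illustrative names, not part of the article's code):

```java
import java.util.Properties;

public class KafkaProps {
    // Raw Kafka client properties equivalent to the producer section of application.yml.
    static Properties producerProps() {
        Properties p = new Properties();
        p.setProperty("bootstrap.servers", "192.168.1.155:9092");
        p.setProperty("retries", "0");               // do not retry failed sends
        p.setProperty("batch.size", "16384");        // 16 KB batch per partition
        p.setProperty("buffer.memory", "33554432");  // 32 MB total send buffer
        p.setProperty("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.setProperty("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return p;
    }
}
```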
3: Producer configuration
A utility class that sends messages to the Kafka server:
```java
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class KafkaProducers {
    private static final Logger log = LoggerFactory.getLogger(KafkaProducers.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    private final Gson gson = new GsonBuilder().create();

    // Serialize the message to JSON and send it to the given topic.
    // Note: with retries=0 in application.yml, a failed send is not retried;
    // errors surface only through the future returned by send().
    public void send(String topic, Object message) {
        log.info("+++++++++++++++++++++ message = {}", gson.toJson(message));
        kafkaTemplate.send(topic, gson.toJson(message));
    }
}
```
Producer business implementation:
```java
/**
 * Send alarm-analysis data to the Kafka service.
 * @param businessAnalysisBase the analysis payload
 */
@Override
public void sendWarningAnalysisMessage(BusinessAnalysisBase businessAnalysisBase) {
    kafkaProducers.send(BusinessConstant.ALARM_ANALYZED_EVENT, businessAnalysisBase);
    log.info("Message sent successfully");
}
```
Consumer subscription. Note that the `groupId` attribute on `@KafkaListener` (here `Test`) overrides the default `group-id` (`appServer`) set in application.yml for this listener:
```java
import java.util.Optional;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class WarningAnalysisConsumer {
    private static final Logger log = LoggerFactory.getLogger(WarningAnalysisConsumer.class);

    @KafkaListener(topics = {BusinessConstant.ALARM_ANALYZED_EVENT}, groupId = "Test")
    public void consumer(ConsumerRecord<?, ?> record) {
        // record.value() can be null, so guard with Optional before handling
        Optional<?> kafkaMessage = Optional.ofNullable(record.value());
        if (kafkaMessage.isPresent()) {
            Object message = kafkaMessage.get();
            log.info("----------------- recordTest = {}", record);
            log.info("------------------ messageTest = {}", message);
        }
    }
}
```
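The `Optional.ofNullable` guard in the listener is there because `record.value()` can legitimately be null, for example for a tombstone record on a compacted topic. A minimal pure-JDK sketch of the same pattern (`NullValueGuard` and `extract` are illustrative names, not part of the article's code):

```java
import java.util.Optional;

public class NullValueGuard {
    // Mirrors the listener's guard: map a possibly-null record value
    // to a string, or fall back instead of throwing a NullPointerException.
    static String extract(Object value) {
        return Optional.ofNullable(value)
                .map(Object::toString)
                .orElse("<skipped null record>");
    }
}
```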