Kafka Introduction
Kafka is a message middleware; different message queues are distinguished by topic, and within the same consumer group only one consumer receives each message. The producer and consumer configuration parameters (bound by Spring Boot under spring.kafka) are as follows:
spring:
  kafka:
    producer:
      bootstrap-servers: 10.212.130.44:19092,10.212.130.46:19092,10.212.130.47:19092
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      group-id: configuersys
      bootstrap-servers: 10.212.130.44:19092,10.212.130.46:19092,10.212.130.47:19092
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      enable-auto-commit: true
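The producer settings above are picked up by Spring Boot's auto-configured KafkaTemplate. A minimal sketch of publishing a message is shown below; the MessageSender class is illustrative and not part of the original project:
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

// Illustrative sender: KafkaTemplate<String, String> is auto-configured by Spring Boot
// from the producer settings above (String key/value serializers).
@Component
public class MessageSender {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public MessageSender(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Publish a plain-text payload to the given topic, e.g. IM_SYSSTAT.
    public void send(String topic, String payload) {
        kafkaTemplate.send(topic, payload);
    }
}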
Configure the Maven dependency
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.2.1.RELEASE</version>
</dependency>
Configure a KafkaListener to consume multiple Kafka topics
In Spring Boot, declaring @KafkaListener(topics = {"XXXX", "XXXX"}) on a method is all that is needed to register the listener:
import com.bdcloud.websocket.WebSocketHandler;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class DualWithKafka {

    private static final Logger logger = LoggerFactory.getLogger(DualWithKafka.class);

    // Comma-separated topic list from application.yml (kafatopics), available for programmatic use.
    @Value("${kafatopics}")
    private String[] kafatopics;

    // Listens on every topic in the kafatopics property; the SpEL expression
    // splits the comma-separated value into individual topic names.
    @KafkaListener(topics = {"#{'${kafatopics}'.split(',')}"})
    public void receiverBroadCast(ConsumerRecord<String, String> record) throws Exception {
        WebSocketHandler.sendMessage2Jsp(record);
    }

    // Listens on a single, explicitly named topic.
    @KafkaListener(topics = {"IM_CONSSRPRECEVAL"})
    public void receiverBroadCast2(ConsumerRecord<String, String> record) throws Exception {
        WebSocketHandler.sendMessage2Jsp(record);
    }
}
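WebSocketHandler.sendMessage2Jsp comes from the project's own com.bdcloud.websocket package and is not shown here. Purely as a sketch of what such a forwarder might look like, assuming spring-websocket is on the classpath and sessions are registered by the connection callbacks, it could be along these lines (the session map and message format are illustrative):
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.web.socket.TextMessage;
import org.springframework.web.socket.WebSocketSession;

// Illustrative sketch only; the real WebSocketHandler in this project is not shown.
// Assumes sessions are added/removed by the WebSocket connection callbacks elsewhere.
public class WebSocketHandler {

    private static final Map<String, WebSocketSession> SESSIONS = new ConcurrentHashMap<>();

    // Forward the Kafka record value to every connected page, tagged with its topic.
    public static void sendMessage2Jsp(ConsumerRecord<String, String> record) throws IOException {
        TextMessage message = new TextMessage(record.topic() + ":" + record.value());
        for (WebSocketSession session : SESSIONS.values()) {
            if (session.isOpen()) {
                session.sendMessage(message);
            }
        }
    }
}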
The corresponding kafatopics property is configured in application.yml as follows:
kafatopics: IM_STASERVSTAT,IM_SUNLIGHT,IM_SSRCLKORBNUM,IM_SATSSRPRECEVAL,IM_SSR_SNDALARM_EMAIL,IM_SSRCASTEVAL,IM_SOFTSTAT,IM_SATLLH,IM_BDSSR,IM_SYSSTAT,IM_SSREVALALARM