Kafka testing

This post walks through installing and deploying a Kafka cluster, including editing server.properties, starting the service, and monitoring the logs. It also discusses how Kafka uses ZooKeeper for broker management and producer load balancing, and provides Java client sample code for sending messages.



Start Kafka (ZooKeeper must already be running):
./bin/kafka-server-start.sh -daemon config/server.properties

List topics:
./bin/kafka-topics.sh --list --zookeeper localhost:2181

Delete a topic:
./bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic cad_convert

Alter a topic config:
kafka-topics.sh --alter --zookeeper localhost:2181 --topic hello --config flush.messages=1

Add partitions. Partitioning benefits the Kafka cluster by balancing load across brokers, and it lets consumers raise their concurrency and throughput:
bin/kafka-topics.sh --alter --zookeeper localhost:2181 --topic cad_convert --partitions 2
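
Topic management can also be done from Java via the AdminClient API that ships with the Kafka client library; a minimal sketch, with the bootstrap address, partition count, and replication factor as assumptions:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicAdmin {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // assumed broker address; use your cluster's bootstrap servers
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // create cad_convert with 2 partitions and replication factor 1
            admin.createTopics(Collections.singletonList(
                    new NewTopic("cad_convert", 2, (short) 1))).all().get();
            // list existing topics
            System.out.println(admin.listTopics().names().get());
        }
    }
}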


If the client log keeps printing "Connection to node ... could not be established. Broker may not be available.", the problem is on the Kafka server side.
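
A quick first check is whether the broker port is even reachable from the client host (address and port taken from the producer config later in this post):

nc -vz 192.168.1.52 9092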

Installing Kafka (cluster deployment)
1. Download the Kafka package:
http://kafka.apache.org/downloads
2. Copy the package to every server and unpack it.
3. cd into the config directory and edit server.properties.
On every server, set in server.properties:
broker.id=an integer, different on each server
listeners=PLAINTEXT://192.168.145.136:9092 (the current server's IP)
zookeeper.connect=192.168.145.139:2181,192.168.145.140:2181,192.168.145.143:2181 (the ZooKeeper cluster addresses)
4. Start the ZooKeeper cluster, then start the Kafka service on every server.
Start Kafka: sh kafka-server-start.sh -daemon ../config/server.properties
5. Check the output of server.log under the logs directory:
tail -100f server.log
6. Stop the Kafka service:
sh kafka-server-stop.sh
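
Putting step 3 together, the overrides in server.properties on one node would look like this (IPs from the text above; broker.id must differ on every node):

# server.properties on 192.168.145.136
broker.id=1
listeners=PLAINTEXT://192.168.145.136:9092
zookeeper.connect=192.168.145.139:2181,192.168.145.140:2181,192.168.145.143:2181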

(3) Node information registered in ZooKeeper
cluster, controller, controller_epoch, brokers, admin, isr_change_notification, consumers, latest_producer_id_block, config

controller – the controller node
brokers – broker information for the Kafka cluster, plus topics
consumers – consumer ids/owners/offsets
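
These znodes can be inspected with the zkCli.sh client that ships with ZooKeeper (server address is an assumption):

./bin/zkCli.sh -server localhost:2181
ls /
ls /brokers/ids
get /brokers/ids/0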

How the cluster works:
1. Kafka's design uses ZooKeeper to manage all brokers: ZooKeeper holds a node dedicated to recording the broker server list, at the path /brokers/ids.
On startup, every broker registers itself with ZooKeeper by creating a node /brokers/ids/[0-N] and writing its IP, port, and other details. Brokers create ephemeral nodes, so as soon as a broker goes offline its node is deleted; the broker nodes in ZooKeeper therefore dynamically reflect broker availability. Kafka topics are tracked in a similar way.
2. Producer load balancing
Producers need to distribute messages sensibly across the distributed brokers, which raises the problem of producer load balancing.
For this, Kafka supports both traditional layer-4 load balancing and a ZooKeeper-based scheme.

Load balancing with ZooKeeper
The idea is simple: the producer listens to the broker nodes in ZooKeeper to sense changes in broker and topic state, and adjusts its routing dynamically. Kafka already implements this mechanism on top of ZooKeeper.
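
In the modern Java client (used in the code below), the producer-side balancing hook is the Partitioner interface; the built-in default hashes the record key or spreads keyless records. A minimal sketch of a custom round-robin partitioner, with the class name as an illustrative assumption:

import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

public class RoundRobinIshPartitioner implements Partitioner {

    private final AtomicInteger counter = new AtomicInteger();

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        // rotate over the partitions currently known for this topic
        int numPartitions = cluster.partitionsForTopic(topic).size();
        return Math.floorMod(counter.getAndIncrement(), numPartitions);
    }

    @Override public void configure(Map<String, ?> configs) {}

    @Override public void close() {}
}

It is enabled on the producer with props.put("partitioner.class", RoundRobinIshPartitioner.class.getName()).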

[Figure: a typical Kafka architecture with Producers, brokers, Consumer groups, and a ZooKeeper cluster]
As the figure shows, a typical Kafka architecture comprises a number of Producers (message producers), a number of brokers (servers acting as Kafka nodes), a number of Consumers (organized in consumer groups), and a ZooKeeper cluster. Kafka uses ZooKeeper to manage cluster configuration, elect leaders, and rebalance when a consumer group changes (consumer-side load balancing, covered in the next lesson). Producers publish messages to brokers with a push model; Consumers subscribe to and consume messages from brokers with a pull model.

Java code: publishing and handling messages
Producer-side configuration:

# kafka broker list
kafka.broker_list_env=
kafka.broker_list= 192.168.1.52:9092

Topic constant:
/** CAD conversion event topic */
public static final String CAD_CONVERT = "cad_convert";

In the controller:
// CAD files need to be converted
if (webdavFileInfo.getFiletype() == GlobalConstants.WebdavFileType.CAD) {
    LuckyThreadPool.getInstance().getThreadPool().execute(() -> {
        Kafka.publish(KafkaTopics.CAD_CONVERT, webdavFileInfo);
    });
}

@Component
public class Kafka {

    private static final Logger LOGGER = LoggerFactory.getLogger(Kafka.class);

    private static String BROKER_LIST_ENV;

    @Value("${kafka.broker_list_env:null}")
    public void setInBrokerListEnv(String brokerListEnv) {
        BROKER_LIST_ENV = brokerListEnv;
    }

    private static String BROKER_LIST;

    @Value("${kafka.broker_list:null}")
    public void setInBrokerList(String brokerList) {
        BROKER_LIST = brokerList;
    }

    private static String GROUP_ID;

    @Value("${kafka.group_id:null}")
    public void setGroupId(String groupId) {
        GROUP_ID = groupId;
    }

    private static volatile Set<String> TOPICS;

    @Value("${kafka.topics:null}")
    public void setInTopics(String inTopics) {
        TOPICS = TOPICS == null ? new HashSet<>() : TOPICS;
        for (String top : inTopics.split(",")) {
            String[] ts = top.split(":");
            String topic = ts[0];
            if (!StringUtils.isEmpty(topic) && !"null".equals(topic)) {
                TOPICS.add(topic);
            }
        }
    }


    private static final String NULL = "null";

    public static final String INNER = "INNER";

    private static KProducer PRODUCER = null;


    @Autowired
    private MsgListener[] listeners;

    private static Kafka kafka;

    @PostConstruct
    public void setListener() {
        kafka = this;
        kafka.listeners = this.listeners;
    }


    public static void addTopics(Collection<String> topics) {
        if (TOPICS == null) {
            TOPICS = new HashSet<>();
        }
        TOPICS.addAll(topics);
    }


    public static boolean init() {
        try {

            if (kafka.listeners == null || kafka.listeners.length == 0) {
                return false;
            }

            GROUP_ID = NULL.equals(GROUP_ID) ? System.getenv("SERVICE_NAME") : GROUP_ID;

            // initialize the producer
            String brokerList = EnvUtils.getVal(BROKER_LIST_ENV, BROKER_LIST);
            if (notNull(brokerList)) {
                LOGGER.info("Initializing message producer...");
                PRODUCER = new KProducer(brokerList);
            }

            // initialize the consumer
            if (notNull(brokerList) && TOPICS != null && !TOPICS.isEmpty()) {
                LOGGER.info("Initializing message consumer...");
                KConsumer consumer = new KConsumer(brokerList, GROUP_ID, TOPICS);
                Stevedore.executeIfAbsent(INNER, consumer);
                Stevedore.addListenerTo(INNER, kafka.listeners);
            }

            return true;
        } catch (Exception e) {
            LOGGER.error("kafka 初始化失败 --> " + e.getMessage());
        }
        return false;
    }


    private static boolean notNull(String data) {
        return !NULL.equals(data) && !StringUtils.isEmpty(data);
    }


    public static void addListenerTo(String key, MsgListener listener) {
        Stevedore.addListenerTo(key, listener);
    }

    public static boolean publish(String topic, Object data) {
        if (null != data) {
            String msg;
            if (!(data instanceof String)) {
                msg = JsonUtil.toJson(data);
            } else {
                msg = (String) data;
            }
            return PRODUCER != null && PRODUCER.send(topic, msg);
        }
        return true;
    }

    public static boolean hi() {
        return true;
    }

    private static class Stevedore {

        private static final Map<String, KConsumer> CONSUMER_MAP = new HashMap<>();
        private static final ExecutorService WORKER = Executors.newFixedThreadPool(2);

        static void executeIfAbsent(String key, KConsumer consumer) {
            if (!CONSUMER_MAP.containsKey(key)) {
                CONSUMER_MAP.put(key, consumer);
                WORKER.execute(consumer.init());
            }
        }

        static void addListenerTo(String key, MsgListener... listener) {
            KConsumer consumer = CONSUMER_MAP.get(key);
            if (consumer != null) consumer.addListener(Arrays.asList(listener));
        }
    }


}
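
Typical producer-side usage, assuming init() is invoked once at application startup (the startup call site is an assumption; the original only shows publish):

// once, during application startup
Kafka.init();
// later, from business code (see the controller snippet above)
Kafka.publish(KafkaTopics.CAD_CONVERT, webdavFileInfo);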

Only the producer side needs this class:
class KProducer {

    private static final Logger LOGGER = LoggerFactory.getLogger(KProducer.class);

    private Producer<String, String> producer = null;

    private String brokerList;

    private KProducer() {
    }

    KProducer(String brokerList) {
        this.brokerList = brokerList;
        // initialize eagerly on construction
        init();
    }

    private Producer<String, String> init() {
        if (producer == null) {
            Properties props = new Properties();
            // Kafka broker list (host:port pairs)
            props.put("bootstrap.servers", brokerList);
            // value serializer class
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // key serializer class
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // wait for the full ISR to acknowledge every record
            props.put("acks", "all");

            producer = new KafkaProducer<>(props);
        }
        return producer;
    }

    boolean send(String topic, String data) {

        LOGGER.info("正在推送主题----->" + topic);

        // 启动debug级别日志
        LOGGER.debug("kafka producer :\ntopic: " + topic + "\ndata: " + data);
        final StringBuilder msg = new StringBuilder();
        final CountDownLatch latch = new CountDownLatch(1);
        producer.send(new ProducerRecord<>(topic, data), (recordMetadata, e) -> {
            if (e != null) {
                msg.append(e.getMessage());
            } else {
                LOGGER.debug("topic: " + recordMetadata.topic() + "\noffset: " + recordMetadata.offset() + "\npartition: " + recordMetadata.partition());
            }
            latch.countDown();
        });

        try {
            // wait up to 5 seconds for the broker acknowledgement
            if (!latch.await(5, TimeUnit.SECONDS)) {
                throw new KafkaException("send msg timeout");
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new KafkaException("send msg interrupted");
        }
        if (msg.length() == 0) {
            return true;
        }
        throw new KafkaException(msg.toString());
    }

    void batchPublish(String topic, List<String> data) {
        for (String content : data) {
            LOGGER.debug("kafka producer :\ntopic: " + topic + "\ndata: " + content);
            producer.send(new ProducerRecord<>(topic, content));
        }
    }

}
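
Note that batchPublish is fire-and-forget: unlike send, it neither waits for the broker acknowledgement nor surfaces errors, so it is only suitable where losing individual records is acceptable.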

Consumer side: a standalone service

# broker list
kafka.broker_list_env=
kafka.broker_list= 192.168.1.52:9092

# topics
kafka.group_id= cad
kafka.topics= cad_convert:1
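
A note on the kafka.topics format: setInTopics in the Kafka class splits each comma-separated entry on ":" and keeps only the part before it, so the ":1" suffix here is parsed off and ignored by the code shown.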

The consumer service reuses the same Kafka class; only the consumer-side initialization takes effect there.


class KConsumer implements Runnable{

   private static final Logger LOGGER = LoggerFactory.getLogger(KConsumer.class);


   private volatile KafkaConsumer<String, String> consumer;

   private static volatile Set<MsgListener> listeners = new HashSet<>();

   private String brokerList;

   private String groupId;

   private Set<String> topics;

   private KConsumer() {}

   KConsumer(String brokerList, String groupId, Set<String> topics) {
      this.brokerList = brokerList;
      this.groupId = groupId;
      this.topics = topics;
   }

   KConsumer addListener(Collection<MsgListener> ls){
      listeners.addAll(ls);
      return this;
   }


   KConsumer init(){
      if(consumer==null){
         Properties props = new Properties();
         props.put("bootstrap.servers", this.zookeeper);
         props.put("group.id", this.groupId);
         props.put("enable.auto.commit", "true");
         props.put("auto.offset.reset", "latest");
         props.put("auto.commit.interval.ms", "1000");
         props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
         props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
         consumer = new KafkaConsumer<>(props);
      }
      return this;
    }


   @Override
   public void run() {
       try {
         if(topics!=null && !topics.isEmpty()) {
            this.consumer.subscribe(topics);
            LOGGER.info("正在订阅主题 --> "+topics);
            for (;;){
               ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
               for (ConsumerRecord<String, String> record : records) {
                  LOGGER.debug("topic: "+record.topic()+"\noffset: "+record.offset()+"\npartition: "+record.partition());
                  LOGGER.trace(record.value());
                  listeners.forEach(listener -> listener.onMessage(record.topic(),record.value()));
               }
            }
         }
       } finally {
          this.consumer.close();
       }
   }
}
public final class KafkaTopics {

   /** CAD conversion event topic */
   public static final String CAD_CONVERT = "cad_convert";

}
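
The MsgListener interface itself is not shown in the original; judging from how it is used above and below, it is presumably just:

public interface MsgListener {

    /** Invoked for every record received on a subscribed topic. */
    void onMessage(String topic, String data);
}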


@Service
public class KafkaMessageListener implements MsgListener {

    private static final Logger logger = LoggerFactory.getLogger(KafkaMessageListener.class);

    @Autowired
    private CadConvertService cadConvertService;

    @Override
    public void onMessage(String topic, String data) {
        switch (topic) {
            case KafkaTopics.CAD_CONVERT:
                logger.info("收到CAD转换主题:{} , data:{}", topic, data);
                cadConvertService.convertCAD(data);
                break;
            default:
                break;
        }
    }

}

