Creation
When each Broker starts, it creates a KafkaController instance, but only one of them becomes the Leader and provides cluster-management services. Leader election is implemented on top of Zookeeper: at startup, every KafkaController tries to write the same ephemeral node /controller; only one create succeeds, and that instance becomes the Leader while the rest become Followers. When the Leader goes offline, its ephemeral node disappears and the remaining Followers re-run the election.
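As a minimal sketch of this election, assuming the plain ZooKeeper Java client rather than Kafka's internal KafkaZkClient (the real /controller payload also carries version and timestamp fields, which are omitted here):

```scala
import org.apache.zookeeper.{CreateMode, KeeperException, WatchedEvent, Watcher, ZooDefs, ZooKeeper}

object ControllerElection {
  // Try to become controller by creating the ephemeral /controller node.
  // Returns true if this broker won the election.
  def tryElect(zk: ZooKeeper, brokerId: Int): Boolean = {
    try {
      // EPHEMERAL: the node is removed automatically when the creator's
      // session dies, which is how Followers detect a failed Leader.
      zk.create("/controller", s"""{"brokerid":$brokerId}""".getBytes("UTF-8"),
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL)
      true
    } catch {
      case _: KeeperException.NodeExistsException =>
        // Someone else is Leader: watch /controller so that its deletion
        // triggers a re-election among the Followers.
        zk.exists("/controller", new Watcher {
          override def process(event: WatchedEvent): Unit =
            if (event.getType == Watcher.Event.EventType.NodeDeleted)
              tryElect(zk, brokerId)
        })
        false
    }
  }
}
```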
ControllerContext Initialization
Kafka's most important concepts are the "Topic" and the "Partition", and this metadata is persisted in Zookeeper. The ControllerContext object holds Kafka's core cluster state; its main fields, and the Zookeeper paths they are read from, are listed below (a sketch of how they are populated follows the list).
- var liveBrokers: Set[Broker] => read /brokers/ids to obtain all Brokers in the cluster
- var liveBrokerEpoch: Map[Int,Long] => the same data, converted to a brokerId <-> epoch mapping

  ```
  [zk] ls /brokers
  [ids, topics, seqid]
  [zk] ls /brokers/ids
  [1]
  [zk] get /brokers/ids/1
  {"listener_security_protocol_map":{"INTERNAL":"PLAINTEXT","EXTERNAL":"SSL"},"endpoints":["INTERNAL://192.168.10.10:9092","EXTERNAL://172.168.10.10:9093"],"jmx_port":9393,"host":"192.168.10.10","timestamp":"1631534356536","port":9092,"version":4}
  ```
- var allTopics: Set[String] => read /brokers/topics to obtain all Topics in the cluster
- val partitionAssignments = mutable.Map.empty[String,mutable.Map[Int,ReplicaAssignment]] => the replica assignment of each partition of each topic, e.g. 0 -> [1], 1 -> [1], 2 -> [1], i.e. which Brokers host each partition's replicas
  => this assignment is produced when the topic is created, either manually specified or generated by the automatic strategy

  ```
  [zk] ls /brokers
  [ids, topics, seqid]
  [zk] ls /brokers/topics
  ....
  ....
  [zk] get /brokers/topics/app_logs
  {"version":1,"partitions":{"2":[1],"1":[1],"0":[1]}}
  ```
- val partitionLeadershipInfo = mutable.Map.empty[TopicPartition,LeaderIsrAndControllerEpoch] => the Leader and ISR list of each partition of each topic

  ```
  [zk] ls /brokers
  [ids, topics, seqid]
  [zk] get /brokers/topics/app_logs/partitions/0/state
  {"controller_epoch":1,"leader":1,"version":1,"leader_epoch":0,"isr":[1]}
  ```
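As a rough illustration of how these fields are populated at startup, here is a sketch against the raw ZooKeeper client. ContextSketch and its simplified field set are hypothetical stand-ins for the real ControllerContext, and the JSON parsing of the topic znodes is elided:

```scala
import scala.collection.mutable
import scala.jdk.CollectionConverters._
import org.apache.zookeeper.ZooKeeper
import org.apache.zookeeper.data.Stat

class ContextSketch(zk: ZooKeeper) {
  val liveBrokerEpoch = mutable.Map.empty[Int, Long]
  val allTopics       = mutable.Set.empty[String]

  def initialize(): Unit = {
    // The children of /brokers/ids are the live broker ids; the creation
    // zxid of each broker znode serves as that broker's epoch.
    for (id <- zk.getChildren("/brokers/ids", false).asScala) {
      val stat = new Stat()
      zk.getData(s"/brokers/ids/$id", false, stat)
      liveBrokerEpoch(id.toInt) = stat.getCzxid
    }
    // The children of /brokers/topics are the topic names; each znode's
    // payload, e.g. {"version":1,"partitions":{"0":[1],...}}, would be
    // parsed into partitionAssignments (parsing omitted here).
    allTopics ++= zk.getChildren("/brokers/topics", false).asScala
  }
}
```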
TopicChangeHandler
After KafkaController starts, it registers a TopicChangeHandler that watches for child-node changes under /brokers/topics.
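A minimal sketch of such a child watch, again assuming the plain ZooKeeper client: TopicChangeWatcher is a hypothetical stand-in, and the real handler merely enqueues a TopicChange event onto the controller's single event thread rather than diffing inline like this:

```scala
import scala.jdk.CollectionConverters._
import org.apache.zookeeper.{WatchedEvent, Watcher, ZooKeeper}

class TopicChangeWatcher(zk: ZooKeeper) extends Watcher {
  @volatile private var known = Set.empty[String]

  // getChildren with a watcher both reads the current topic list and
  // arms the (one-shot) watch for the next child change.
  def register(): Unit =
    known = zk.getChildren("/brokers/topics", this).asScala.toSet

  override def process(event: WatchedEvent): Unit =
    if (event.getType == Watcher.Event.EventType.NodeChildrenChanged) {
      // Re-read and re-arm, then diff against the last known topic set.
      val current = zk.getChildren("/brokers/topics", this).asScala.toSet
      val added   = current -- known  // new topics
      val deleted = known -- current  // removed topics
      known = current
      println(s"topics added=$added deleted=$deleted")
    }
}
```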
When creating a topic with kafka-topics.sh, you must specify either partitions / replication-factor or an explicit replica-assignment. Creation then proceeds as follows (see the sketch after this list):
- Based on the manually specified or automatically generated strategy, build the replica-assignment,
- then write the topic's information as a child node under /brokers/topics, which triggers the TopicChangeHandler.
- TopicChangeHandler watches not only topic creation but also topic deletion.
- When a new topic is added, a listener for Partition changes is registered as well.
- Adding a new topic also triggers onNewPartitionCreation.
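The write in the second step can be sketched as follows. CreateTopicSketch builds the JSON by hand and writes it with the raw ZooKeeper client, whereas the real tool validates the assignment first (and, on newer versions, goes through the AdminClient instead of Zookeeper):

```scala
import org.apache.zookeeper.{CreateMode, ZooDefs, ZooKeeper}

object CreateTopicSketch {
  def create(zk: ZooKeeper, topic: String, assignment: Map[Int, Seq[Int]]): Unit = {
    // Render e.g. Map(0 -> Seq(1)) as {"version":1,"partitions":{"0":[1]}}.
    val partitions = assignment
      .map { case (p, replicas) => s""""$p":[${replicas.mkString(",")}]""" }
      .mkString(",")
    val json = s"""{"version":1,"partitions":{$partitions}}"""
    // PERSISTENT: topic metadata must outlive the creating client's session.
    // This write is what fires the TopicChangeHandler's child watch.
    zk.create(s"/brokers/topics/$topic", json.getBytes("UTF-8"),
      ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT)
  }
}
// e.g. CreateTopicSketch.create(zk, "app_logs", Map(0 -> Seq(1), 1 -> Seq(1), 2 -> Seq(1)))
```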
onNewPartitionCreation drives the state transitions for newly created Partitions: each Partition moves from "non-existent" -> "new" -> "online" (NonExistentPartition -> NewPartition -> OnlinePartition), and each Replica likewise moves from "non-existent" -> "new" -> "online" (NonExistentReplica -> NewReplica -> OnlineReplica).
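The Partition side of those transitions can be sketched as a small transition table; this only models the creation path named above (the real PartitionStateMachine also has an OfflinePartition state and performs Zookeeper writes on each transition):

```scala
sealed trait PartitionState
case object NonExistentPartition extends PartitionState
case object NewPartition         extends PartitionState
case object OnlinePartition      extends PartitionState

object PartitionTransitions {
  // Legal moves along the creation path only.
  private val valid: Map[PartitionState, Set[PartitionState]] = Map(
    NonExistentPartition -> Set(NewPartition),
    NewPartition         -> Set(OnlinePartition),
    OnlinePartition      -> Set.empty
  )

  def transit(from: PartitionState, to: PartitionState): PartitionState =
    if (valid(from).contains(to)) to
    else throw new IllegalStateException(s"illegal transition: $from -> $to")
}
```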