I. Context
The previous post, "Kafka Controller Election", analyzed how the Controller gets elected and showed that onControllerFailover() runs once the election succeeds. Now let's look at the responsibilities the Controller role takes on.
II. Registering Listeners
Before reading any resources from ZooKeeper, the Controller registers listeners so that it receives callbacks when brokers or topics change.
//handlers that watch child-node changes: broker list, topics, topic deletion, log-dir event notifications, ISR change notifications
val childChangeHandlers = Seq(brokerChangeHandler, topicChangeHandler, topicDeletionHandler,
  logDirEventNotificationHandler, isrChangeNotificationHandler)
//register each child-change handler in turn
childChangeHandlers.foreach(zkClient.registerZNodeChildChangeHandler)
//handlers that watch data changes on a single znode: preferred replica election, partition reassignment
val nodeChangeHandlers = Seq(preferredReplicaElectionHandler, partitionReassignmentHandler)
nodeChangeHandlers.foreach(zkClient.registerZNodeChangeHandlerAndCheckExistence)
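What do these handlers actually do? Below is a minimal sketch of the broker-change handler, closely following the Kafka source (the exact shape may vary slightly between versions): a handler simply maps a ZooKeeper watch firing on its path to a controller event that is queued for the single controller event thread.

//sketch: the broker-change handler watches /brokers/ids and turns every
//child-change notification into a BrokerChange event on the controller's event queue
class BrokerChangeHandler(eventManager: ControllerEventManager) extends ZNodeChildChangeHandler {
  override val path: String = BrokerIdsZNode.path   // "/brokers/ids"

  override def handleChildChange(): Unit = {
    // no work is done on the ZooKeeper callback thread; the queued event is processed
    // later by the controller event thread
    eventManager.put(BrokerChange)
  }
}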
III. Initializing ControllerContext
1. Get all brokers
Essentially this reads the ids under /brokers/ids/ to build the broker list.
val curBrokerAndEpochs = zkClient.getAllBrokerAndEpochsInCluster

def getAllBrokerAndEpochsInCluster: Map[Broker, Long] = {
  //read all broker ids under /brokers/ids, sorted
  val brokerIds = getSortedBrokerList
  //build a GetDataRequest for each broker id
  val getDataRequests = brokerIds.map(brokerId => GetDataRequest(BrokerIdZNode.path(brokerId), ctx = Some(brokerId)))
  val getDataResponses = retryRequestsUntilConnected(getDataRequests)
  getDataResponses.flatMap { getDataResponse =>
    val brokerId = getDataResponse.ctx.get.asInstanceOf[Int]
    getDataResponse.resultCode match {
      case Code.OK =>
        //decode: build a BrokerInfo from the brokerId and the znode JSON, which looks like:
        //{
        //  "version":5,
        //  "host":"localhost",
        //  "port":9092,
        //  "jmx_port":9999,
        //  "timestamp":"2233345666",
        //  "endpoints":["CLIENT://host1:9092", "REPLICATION://host1:9093"],
        //  "rack":"dc1",
        //  "features": {"feature": {"min_version":1, "first_active_version":2, "max_version":3}}
        //}
        Some((BrokerIdZNode.decode(brokerId, getDataResponse.data).broker, getDataResponse.stat.getCzxid))
      case Code.NONODE => None
      case _ => throw getDataResponse.resultException.get
    }
  }.toMap
}
This step yields a Map[Broker, Long], where the value is the broker epoch (the czxid of the broker's registration znode).
Each Broker carries the broker id along with its connection endpoints and rack information, so at this point the Controller knows which brokers it has to manage and how to reach them.
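As a quick illustration (hypothetical usage, not code from the controller), the entries of that map expose exactly the fields decoded from the JSON above:

//hypothetical illustration of what curBrokerAndEpochs contains
curBrokerAndEpochs.foreach { case (broker, epoch) =>
  // broker.id        -> the id read from /brokers/ids/<id>
  // broker.endPoints -> the listeners the Controller can connect to
  // broker.rack      -> optional rack info ("dc1" in the sample JSON)
  println(s"broker=${broker.id} rack=${broker.rack.getOrElse("none")} epoch=$epoch")
}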
2. Determine whether these brokers are compatible
val (compatibleBrokerAndEpochs, incompatibleBrokerAndEpochs) = partitionOnFeatureCompatibility(curBrokerAndEpochs)
In the returned pair:
compatibleBrokerAndEpochs is the map of compatible brokers,
incompatibleBrokerAndEpochs is the map of incompatible brokers.
So how is a broker judged compatible? Let's look at the code below:
private def partitionOnFeatureCompatibility(brokersAndEpochs: Map[Broker, Long]): (Map[Broker, Long], Map[Broker, Long]) = {
  //partition returns a pair: first all elements that satisfy the predicate p, then all elements that do not,
  //i.e. the results of filter and filterNot. The default implementation traverses the collection twice;
  //strict collections override partition in StrictOptimizedIterableOps to do it in a single pass.
  brokersAndEpochs.partition {
    case (broker, _) =>
      !config.isFeatureVersioningSupported ||
      !featureCache.getFeatureOption.exists(
        latestFinalizedFeatures =>
          BrokerFeatures.hasIncompatibleFeatures(broker.features,
            latestFinalizedFeatures.finalizedFeatures().asScala.
              map(kv => (kv._1, kv._2.toShort)).toMap))
  }
}
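Since the whole method hinges on partition, here is a tiny standalone example of that collection method (unrelated to the controller, purely illustrative):

//standalone illustration of Map.partition: one call splits the entries by a predicate
val epochs = Map("broker-1" -> 10L, "broker-2" -> 3L, "broker-3" -> 25L)
val (recent, stale) = epochs.partition { case (_, epoch) => epoch >= 10L }
// recent = Map(broker-1 -> 10, broker-3 -> 25), stale = Map(broker-2 -> 3)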
def isFeatureVersioningSupported = interBrokerProtocolVersion.isFeatureVersioningSupported
public enum MetadataVersion {
    IBP_0_8_0(-1, "0.8.0", ""),
    //.....
    IBP_2_7_IV0(-1, "2.7", "IV0"),
    //..... later versions omitted
    ;

    public boolean isFeatureVersioningSupported() {
        return this.isAtLeast(IBP_2_7_IV0);
    }
}
MetadataVersion enumerates the inter-broker protocol (IBP) versions Kafka knows about, and IBP_2_7_IV0 (2.7) is the first one that supports feature versioning. If the cluster's inter.broker.protocol.version is below 2.7, the feature check is skipped and every broker is treated as compatible; otherwise a broker whose supported feature ranges conflict with the cluster's finalized features is judged incompatible.
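Put concretely, here is a hedged paraphrase of what BrokerFeatures.hasIncompatibleFeatures checks (simplified types, not the actual source, which works on Features/SupportedVersionRange objects): a broker is incompatible when some finalized feature level falls outside the min/max range the broker advertises as supported, or when the broker does not know the feature at all.

//simplified sketch of the compatibility rule
case class SupportedRange(min: Short, max: Short)

def hasIncompatibleFeatures(supported: Map[String, SupportedRange],
                            finalized: Map[String, Short]): Boolean =
  finalized.exists { case (feature, finalizedLevel) =>
    supported.get(feature) match {
      case Some(range) => finalizedLevel < range.min || finalizedLevel > range.max
      case None        => true // unknown feature -> incompatible
    }
  }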
3. Mark the compatible brokers as live
controllerContext.setLiveBrokers(compatibleBrokerAndEpochs)

class ControllerContext extends ControllerChannelContext {
  private val liveBrokers = mutable.Set.empty[Broker]
  private val liveBrokerEpochs = mutable.Map.empty[Int, Long]

  def setLiveBrokers(brokerAndEpochs: Map[Broker, Long]): Unit = {
    clearLiveBrokers()
    addLiveBrokers(brokerAndEpochs)
  }

  def addLiveBrokers(brokerAndEpochs: Map[Broker, Long]): Unit = {
    liveBrokers ++= brokerAndEpochs.keySet
    liveBrokerEpochs ++= brokerAndEpochs.map { case (broker, brokerEpoch) => (broker.id, brokerEpoch) }
  }
}
In other words, ControllerContext keeps a set of live brokers (liveBrokers) plus a map from broker id to broker epoch (liveBrokerEpochs), and setLiveBrokers simply clears and repopulates both with the compatible brokers.
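A brief hypothetical usage sketch (not controller code; it assumes the liveBrokerIds and liveBrokerIdAndEpochs accessors that recent ControllerContext versions expose over this state):

//hypothetical usage of the live-broker bookkeeping
val ctx = new ControllerContext
ctx.setLiveBrokers(compatibleBrokerAndEpochs)
ctx.liveBrokerIds          // Set[Int]: ids of the brokers the controller considers alive
ctx.liveBrokerIdAndEpochs  // Map[Int, Long]: the epoch each live broker registered with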
4. Get all topics
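At this step the controller reads the topic list from ZooKeeper and caches it in the ControllerContext. A minimal sketch of what this looks like, assuming the call chain used in recent Kafka releases (the exact signature may differ in your version):

//read all children of /brokers/topics, registering a watch so later topic changes
//fire TopicChange events, and cache the topic names in the ControllerContext
controllerContext.setAllTopics(zkClient.getAllTopicsInCluster(true))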