An Elasticsearch node joins the cluster and prepares to take part in master election at startup, inside Node's start() method. There, start() calls ZenDiscovery's startInitialJoin(), which kicks off the join-and-election process.
@Override
public void startInitialJoin() {
    // start the join thread from a cluster state update. See {@link JoinThreadControl} for details.
    clusterService.submitStateUpdateTask("initial_join", new LocalClusterUpdateTask() {

        @Override
        public ClusterTasksResult<LocalClusterUpdateTask> execute(ClusterState currentState) throws Exception {
            // do the join on a different thread, the DiscoveryService waits for 30s anyhow till it is discovered
            joinThreadControl.startNewThreadIfNotRunning();
            return unchanged();
        }

        @Override
        public void onFailure(String source, @org.elasticsearch.common.Nullable Exception e) {
            logger.warn("failed to start initial join process", e);
        }
    });
}
startInitialJoin() submits a task to clusterService to be run on the cluster-state update thread. The task is a LocalClusterUpdateTask whose execute() does nothing more than call joinThreadControl.startNewThreadIfNotRunning(). JoinThreadControl is an inner class of ZenDiscovery whose main job is to guarantee that only one join thread is running at any time.
private final AtomicBoolean running = new AtomicBoolean(false);
private final AtomicReference<Thread> currentJoinThread = new AtomicReference<>();
JoinThreadControl uses an AtomicBoolean named running to mark when joining and election start and stop, while currentJoinThread is an AtomicReference that guarantees both the visibility and the uniqueness of the worker (join) thread.
startNewThreadIfNotRunning() first calls joinThreadActive() to make sure no join thread is currently running.
public boolean joinThreadActive() {
    Thread currentThread = currentJoinThread.get();
    return running.get() && currentThread != null && currentThread.isAlive();
}
If there is none, a new worker thread is created to start joining the cluster.
public void startNewThreadIfNotRunning() {
    ClusterService.assertClusterStateThread();
    if (joinThreadActive()) {
        return;
    }
    threadPool.generic().execute(new Runnable() {
        @Override
        public void run() {
            Thread currentThread = Thread.currentThread();
            if (!currentJoinThread.compareAndSet(null, currentThread)) {
                return;
            }
            while (running.get() && joinThreadActive(currentThread)) {
                try {
                    innerJoinCluster();
                    return;
                } catch (Exception e) {
                    logger.error("unexpected error while joining cluster, trying again", e);
                    // Because we catch any exception here, we want to know in
                    // tests if an uncaught exception got to this point and the test infra uncaught exception
                    // leak detection can catch this. In practise no uncaught exception should leak
                    assert ExceptionsHelper.reThrowIfNotNull(e);
                }
            }
            // cleaning the current thread from currentJoinThread is done by explicit calls.
        }
    });
}
In run(), the thread first claims ownership by CAS-ing itself into currentJoinThread; then, for as long as the service is still running and the current thread is still ZenDiscovery's active join thread, it keeps executing innerJoinCluster() in a loop.
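The "at most one join thread" guarantee rests entirely on that compareAndSet. Below is a minimal standalone sketch of the same pattern (plain Java, not the actual JoinThreadControl code; the class and method names are made up for illustration):

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;

// Simplified model of JoinThreadControl's ownership rule: whichever thread wins the
// compareAndSet on currentWorker becomes the sole worker; every other thread returns at once.
class SingleWorkerGuard {
    private final AtomicBoolean running = new AtomicBoolean(true);
    private final AtomicReference<Thread> currentWorker = new AtomicReference<>();

    void runExclusively(Runnable work) {
        Thread me = Thread.currentThread();
        if (!currentWorker.compareAndSet(null, me)) {
            return; // another thread already owns the work
        }
        // Keep retrying on failure, but only while we are still the registered worker.
        while (running.get() && currentWorker.get() == me) {
            try {
                work.run();
                return;
            } catch (RuntimeException e) {
                // log and retry, mirroring how innerJoinCluster() is retried
            }
        }
    }

    void stop() {
        running.set(false);
        currentWorker.set(null);
    }
}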
innerJoinCluster() proceeds in two steps. First, until this node has decided which node it considers to be master, it keeps calling findMaster() in a loop:
while (masterNode == null && joinThreadControl.joinThreadActive(currentThread)) {
    masterNode = findMaster();
}
Inside findMaster(), pingAndWait() collects the election ping responses from the other nodes in the cluster (how those responses are gathered was covered in detail in an earlier article).
final DiscoveryNode localNode = clusterService.localNode();
// add our selves
assert fullPingResponses.stream().map(ZenPing.PingResponse::node)
    .filter(n -> n.equals(localNode)).findAny().isPresent() == false;
fullPingResponses.add(new ZenPing.PingResponse(localNode, null, clusterService.state()));
The assert above checks that the ping responses do not yet contain the local node; a fresh response for the local node is then appended, and because this node has only just entered the election it has not chosen a master (its master field is null). The combined list is then passed through filterPingResponses():
final List<ZenPing.PingResponse> pingResponses = filterPingResponses(fullPingResponses, masterElectionIgnoreNonMasters, logger);
By default masterElectionIgnoreNonMasters is false, so ping responses from data(-only) nodes are also taken into account during the election.
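filterPingResponses() is essentially a one-line filter on master eligibility. A standalone sketch of the idea, using a simplified PingResponse record in place of ZenPing.PingResponse (this is a model, not the ZenDiscovery method itself):

import java.util.List;
import java.util.stream.Collectors;

// Simplified model of the filtering step: when masterElectionIgnoreNonMasters is true,
// only responses coming from master-eligible nodes are kept; otherwise everything is kept.
record PingResponse(String nodeId, boolean masterEligible, String chosenMaster) {}

class PingFilter {
    static List<PingResponse> filterPingResponses(List<PingResponse> all, boolean ignoreNonMasters) {
        if (!ignoreNonMasters) {
            return all;
        }
        return all.stream()
                  .filter(PingResponse::masterEligible)
                  .collect(Collectors.toList());
    }
}

From the filtered responses, findMaster() then builds two lists: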
List<DiscoveryNode> activeMasters = new ArrayList<>();
for (ZenPing.PingResponse pingResponse : pingResponses) {
    // We can't include the local node in pingMasters list, otherwise we may up electing ourselves without
    // any check / verifications from other nodes in ZenDiscover#innerJoinCluster()
    if (pingResponse.master() != null && !localNode.equals(pingResponse.master())) {
        activeMasters.add(pingResponse.master());
    }
}

// nodes discovered during pinging
List<ElectMasterService.MasterCandidate> masterCandidates = new ArrayList<>();
for (ZenPing.PingResponse pingResponse : pingResponses) {
    if (pingResponse.node().isMasterNode()) {
        masterCandidates.add(new ElectMasterService.MasterCandidate(pingResponse.node(), pingResponse.getClusterStateVersion()));
    }
}
findMaster() walks over the ping responses twice. The first pass collects every master that some node has already elected, as long as it is not the local node, into activeMasters. The second pass puts every master-eligible node (isMasterNode() == true) into the candidate list masterCandidates. If activeMasters is not empty, a master already exists in the cluster, so the node picks the minimum entry in activeMasters according to compareNodes() (master-eligible nodes first, then the smallest node id) as the master it will vote for, and returns it.
public DiscoveryNode tieBreakActiveMasters(Collection<DiscoveryNode> activeMasters) {
    return activeMasters.stream().min(ElectMasterService::compareNodes).get();
}

private static int compareNodes(DiscoveryNode o1, DiscoveryNode o2) {
    if (o1.isMasterNode() && !o2.isMasterNode()) {
        return -1;
    }
    if (!o1.isMasterNode() && o2.isMasterNode()) {
        return 1;
    }
    return o1.getId().compareTo(o2.getId());
}
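The same "take the minimum under a comparator" idea can be seen in isolation in the toy example below (a made-up SimpleNode record, not ES code):

import java.util.Comparator;
import java.util.List;

// Toy illustration of the tie-break order: master-eligible nodes sort before
// non-eligible ones, and ties are broken by the lexicographically smallest node id.
record SimpleNode(String id, boolean masterEligible) {}

class TieBreakDemo {
    static final Comparator<SimpleNode> ORDER =
        Comparator.comparing((SimpleNode n) -> !n.masterEligible())  // eligible (false key) first
                  .thenComparing(SimpleNode::id);

    public static void main(String[] args) {
        List<SimpleNode> seenMasters = List.of(
            new SimpleNode("nodeC", true),
            new SimpleNode("nodeA", false),
            new SimpleNode("nodeB", true));
        // Prints nodeB: nodeA has the smallest id but is not master-eligible.
        System.out.println(seenMasters.stream().min(ORDER).get().id());
    }
}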
If activeMasters is empty, no master has been elected in the cluster yet.
if (electMaster.hasEnoughCandidates(masterCandidates)) {
    final ElectMasterService.MasterCandidate winner = electMaster.electMaster(masterCandidates);
    logger.trace("candidate {} won election", winner);
    return winner.getNode();
}
In that case hasEnoughCandidates() first checks whether the number of candidates in masterCandidates has reached the minimum required to start an election (discovery.zen.minimum_master_nodes, which defaults to -1, meaning no constraint). If it has, electMaster.electMaster() picks the master this node will vote for, and that node is returned.
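hasEnoughCandidates() boils down to a size check against that setting. Roughly, as a standalone model (not the actual ElectMasterService code; minimumMasterNodes stands in for discovery.zen.minimum_master_nodes):

import java.util.Collection;

// Simplified model of the quorum check: with the default of -1 any non-empty candidate
// set is enough; otherwise at least minimumMasterNodes candidates must have been seen.
class QuorumCheck {
    private final int minimumMasterNodes;

    QuorumCheck(int minimumMasterNodes) {
        this.minimumMasterNodes = minimumMasterNodes;
    }

    boolean hasEnoughCandidates(Collection<?> candidates) {
        if (candidates.isEmpty()) {
            return false;
        }
        if (minimumMasterNodes < 1) {
            return true;
        }
        return candidates.size() >= minimumMasterNodes;
    }
}

With enough candidates, electMaster() simply sorts them and takes the first: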
public MasterCandidate electMaster(Collection<MasterCandidate> candidates) {
    assert hasEnoughCandidates(candidates);
    List<MasterCandidate> sortedCandidates = new ArrayList<>(candidates);
    sortedCandidates.sort(MasterCandidate::compare);
    return sortedCandidates.get(0);
}

public static int compare(MasterCandidate c1, MasterCandidate c2) {
    // we explicitly swap c1 and c2 here. the code expects "better" is lower in a sorted
    // list, so if c2 has a higher cluster state version, it needs to come first.
    int ret = Long.compare(c2.clusterStateVersion, c1.clusterStateVersion);
    if (ret == 0) {
        ret = compareNodes(c1.getNode(), c2.getNode());
    }
    return ret;
}

private static int compareNodes(DiscoveryNode o1, DiscoveryNode o2) {
    if (o1.isMasterNode() && !o2.isMasterNode()) {
        return -1;
    }
    if (!o1.isMasterNode() && o2.isMasterNode()) {
        return 1;
    }
    return o1.getId().compareTo(o2.getId());
}
In other words, the candidate this node votes for is the one with the newest cluster state version; when versions are equal, master-eligible nodes beat non-eligible ones, and among those the smallest node id wins.
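To make the ordering concrete, here is a toy example with made-up versions and ids (not ES code; the master-eligibility tie-break is omitted because every entry in masterCandidates is already master-eligible):

import java.util.Comparator;
import java.util.List;

// Toy candidates: the election prefers the highest cluster state version,
// then falls back to the smallest node id.
record Candidate(String id, long clusterStateVersion) {}

class ElectionOrderDemo {
    public static void main(String[] args) {
        List<Candidate> candidates = List.of(
            new Candidate("nodeB", 12),
            new Candidate("nodeA", 10),
            new Candidate("nodeC", 12));
        Candidate winner = candidates.stream()
            .min(Comparator.comparingLong(Candidate::clusterStateVersion).reversed()
                           .thenComparing(Candidate::id))
            .get();
        // nodeB and nodeC share the newest version (12); nodeB wins on the smaller id.
        System.out.println(winner.id());
    }
}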
At this point the node has decided which master it will vote for, and there are two possible cases. If the chosen master is the local node itself, it starts preparing to become master, but only on the condition that it receives enough join votes from the other nodes in the cluster (the threshold comes from the minimum_master_nodes setting, which is meant to represent a quorum of master-eligible nodes). It therefore calls waitToBeElectedAsMaster() and waits for the required number of votes from other nodes before actually becoming master.
final CountDownLatch done = new CountDownLatch(1);
final ElectionCallback wrapperCallback = new ElectionCallback() {
    @Override
    public void onElectedAsMaster(ClusterState state) {
        done.countDown();
        callback.onElectedAsMaster(state);
    }

    @Override
    public void onFailure(Throwable t) {
        done.countDown();
        callback.onFailure(t);
    }
};
waitToBeElectedAsMaster() first creates a CountDownLatch that waits, up to a timeout, for the votes of the other nodes in the cluster, and wraps the election callback so that, when the election completes or fails, the latch is released and the work required for the node to formally become master (or to handle the failure) is carried out.
synchronized (this) {
    assert electionContext != null : "waitToBeElectedAsMaster is called we are not accumulating joins";
    myElectionContext = electionContext;
    electionContext.onAttemptToBeElected(requiredMasterJoins, wrapperCallback);
    checkPendingJoinsAndElectIfNeeded();
}

try {
    if (done.await(timeValue.millis(), TimeUnit.MILLISECONDS)) {
        // callback handles everything
        return;
    }
} catch (InterruptedException e) {
}
The CountDownLatch then await()s, bounded by the timeout, for the votes of the other nodes to arrive.
A request handler for the discovery/zen/join action was already registered in ZenDiscovery's constructor. When a join request (i.e. a vote) arrives, it triggers MembershipListener's onJoin(), which in turn calls handleJoinRequest(). That method first validates the node that sent the join request; once validation passes, it hands the request to nodeJoinController.handleJoinRequest(), which increments the count of pending master joins (the votes cast for this node) and checks whether the node can now become master.
public synchronized void handleJoinRequest(final DiscoveryNode node, final MembershipAction.JoinCallback callback) {
    if (electionContext != null) {
        electionContext.addIncomingJoin(node, callback);
        checkPendingJoinsAndElectIfNeeded();
    } else {
        clusterService.submitStateUpdateTask("zen-disco-node-join",
            node, ClusterStateTaskConfig.build(Priority.URGENT),
            joinTaskExecutor, new JoinTaskListener(callback, logger));
    }
}

private synchronized void checkPendingJoinsAndElectIfNeeded() {
    assert electionContext != null : "election check requested but no active context";
    final int pendingMasterJoins = electionContext.getPendingMasterJoinsCount();
    if (electionContext.isEnoughPendingJoins(pendingMasterJoins) == false) {
        if (logger.isTraceEnabled()) {
            logger.trace("not enough joins for election. Got [{}], required [{}]", pendingMasterJoins,
                electionContext.requiredMasterJoins);
        }
    } else {
        if (logger.isTraceEnabled()) {
            logger.trace("have enough joins for election. Got [{}], required [{}]", pendingMasterJoins,
                electionContext.requiredMasterJoins);
        }
        electionContext.closeAndBecomeMaster();
        electionContext = null; // clear this out so future joins won't be accumulated
    }
}
In checkPendingJoinsAndElectIfNeeded(), if the number of join requests received, i.e. the number of nodes that have voted for this node, has reached the required threshold (requiredMasterJoins, intended by the configuration to represent a quorum of master-eligible nodes), closeAndBecomeMaster() is called to end this round of the election and formally become master.
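The threshold test isEnoughPendingJoins() can be modeled in the same way as the earlier quorum sketch; the sketch below assumes requiredMasterJoins is derived from minimum_master_nodes (the node counts its own vote, so roughly minimum_master_nodes - 1 joins from others are needed):

// Simplified model of the pending-join check: the election succeeds once the number of
// join requests from master-eligible nodes reaches requiredMasterJoins. A value below 0
// means "no requirement", mirroring the minimum_master_nodes default of -1.
class PendingJoinCheck {
    private final int requiredMasterJoins;

    PendingJoinCheck(int requiredMasterJoins) {
        this.requiredMasterJoins = requiredMasterJoins;
    }

    boolean isEnoughPendingJoins(int pendingMasterJoins) {
        return requiredMasterJoins < 0 || pendingMasterJoins >= requiredMasterJoins;
    }
}

When the vote does not reach the threshold in time, the failure path goes through markThreadAsDoneAndStartNew():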
public void markThreadAsDoneAndStartNew(Thread joinThread) {
    ClusterService.assertClusterStateThread();
    if (!markThreadAsDone(joinThread)) {
        return;
    }
    startNewThreadIfNotRunning();
}
If not enough votes arrive within the timeout, this node's bid to become master has failed. It then calls markThreadAsDoneAndStartNew() to retire the current join thread and, via startNewThreadIfNotRunning(), starts a new one that runs through the election process described above again.
If, on the other hand, the node returned by findMaster() is not the local node, the node sends its vote to that elected master through joinElectedMaster():
while (true) {
    try {
        logger.trace("joining master {}", masterNode);
        membership.sendJoinRequestBlocking(masterNode, clusterService.localNode(), joinTimeout);
        return true;
    } catch (Exception e) {
        final Throwable unwrap = ExceptionsHelper.unwrapCause(e);
        if (unwrap instanceof NotMasterException) {
            if (++joinAttempt == this.joinRetryAttempts) {
                logger.info("failed to send join request to master [{}], reason [{}], tried [{}] times", masterNode, ExceptionsHelper.detailedMessage(e), joinAttempt);
                return false;
            } else {
                logger.trace("master {} failed with [{}]. retrying... (attempts done: [{}])", masterNode, ExceptionsHelper.detailedMessage(e), joinAttempt);
            }
        } else {
            if (logger.isTraceEnabled()) {
                logger.trace((Supplier<?>) () -> new ParameterizedMessage("failed to send join request to master [{}]", masterNode), e);
            } else {
                logger.info("failed to send join request to master [{}], reason [{}]", masterNode, ExceptionsHelper.detailedMessage(e));
            }
            return false;
        }
    }

    try {
        Thread.sleep(this.joinRetryDelay.millis());
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}
The join request is sent to the discovery/zen/join action mentioned above. If it succeeds within the allowed number of attempts, the node this node voted for has become master and this round of the election is complete.
If it does not succeed within the allowed attempts (for example, because the chosen node never gathered enough master votes, or was shut down for some reason), joinElectedMaster() returns false and, just as with a failed attempt to become master, a new thread is started to take part in the next round of election. If it succeeds, the join thread simply exits; master fault detection, which keeps this node in sync with the master, has already been started through clusterService.