Code: chooseExcessReplicates
// split nodes into two sets
// moreThanOne contains nodes on rack with more than one replica
// exactlyOne contains the remaining nodes
splitNodesWithRack(candidates, rackMap, moreThanOne, exactlyOne);
// pick one node to delete that favors the delete hint
// otherwise pick one with least space from priSet if it is not empty
// otherwise one node with least space from remains
- 1. Prefer deleting a replica from a rack that already holds more than one replica of the block
- 2. Among those candidates, delete the replica whose DataNode has the stalest heartbeat
- 3. Otherwise, delete the replica on the DataNode with the least free space (a simplified sketch of this selection order follows)
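To make the three rules concrete, here is a minimal, self-contained sketch of that selection order. It is not Hadoop's implementation: the CandidateNode and ExcessReplicaChooser classes and their fields (lastHeartbeat, freeSpace, onRackWithMultipleReplicas) are hypothetical stand-ins for the DatanodeStorageInfo/DatanodeDescriptor data the real chooseReplicaToDelete reads.

import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the per-replica information the NameNode tracks.
class CandidateNode {
    String name;
    long lastHeartbeat;                 // time (ms since epoch) of the last heartbeat
    long freeSpace;                     // remaining space in bytes
    boolean onRackWithMultipleReplicas; // rule 1: its rack already has more than one replica

    CandidateNode(String name, long lastHeartbeat, long freeSpace, boolean multi) {
        this.name = name;
        this.lastHeartbeat = lastHeartbeat;
        this.freeSpace = freeSpace;
        this.onRackWithMultipleReplicas = multi;
    }
}

class ExcessReplicaChooser {
    // Sketch of the three-step priority described in the list above.
    static CandidateNode choose(List<CandidateNode> candidates, long staleThreshold) {
        // Rule 1: restrict to nodes on racks that already hold more than one replica.
        List<CandidateNode> preferred = new ArrayList<>();
        for (CandidateNode c : candidates) {
            if (c.onRackWithMultipleReplicas) preferred.add(c);
        }
        if (preferred.isEmpty()) preferred = candidates; // fall back to the remaining nodes

        // Rules 2 and 3: track the stalest heartbeat and the smallest free space.
        CandidateNode stalest = null, smallest = null;
        long oldestHeartbeat = staleThreshold, minSpace = Long.MAX_VALUE;
        for (CandidateNode c : preferred) {
            if (c.lastHeartbeat < oldestHeartbeat) {
                oldestHeartbeat = c.lastHeartbeat;
                stalest = c;
            }
            if (c.freeSpace < minSpace) {
                minSpace = c.freeSpace;
                smallest = c;
            }
        }
        // A node whose heartbeat is older than the tolerated threshold wins;
        // otherwise fall back to the node with the least free space.
        return stalest != null ? stalest : smallest;
    }
}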
// track the storage whose DataNode has the stalest heartbeat
if (lastHeartbeat < oldestHeartbeat) {
    oldestHeartbeat = lastHeartbeat;
    oldestHeartbeatStorage = storage;
}

// track the storage with the least free HDFS space
if (minSpace > free) {
    minSpace = free;
    minSpaceStorage = storage;
}
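Both comparisons run over the same candidate set, and the stale-heartbeat candidate takes priority over the low-space one. A small usage example of the hypothetical CandidateNode/ExcessReplicaChooser sketch above illustrates that ordering (the 12-second threshold anticipates the default configuration shown below):

public class ChooseExcessReplicaDemo {
    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long staleThreshold = now - 12_000; // 3 s heartbeat interval * multiplier 4 (defaults)

        java.util.List<CandidateNode> candidates = java.util.Arrays.asList(
            new CandidateNode("dn1", now - 20_000, 500L << 30, true), // stale heartbeat
            new CandidateNode("dn2", now -  1_000, 10L  << 30, true), // least free space
            new CandidateNode("dn3", now -  2_000, 800L << 30, true));

        // Prints "dn1": a stale heartbeat beats low free space.
        System.out.println(ExcessReplicaChooser.choose(candidates, staleThreshold).name);
    }
}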
How the oldestHeartbeat threshold is computed:

// a heartbeat older than this is treated as stale
long oldestHeartbeat =
    now() - heartbeatInterval * tolerateHeartbeatMultiplier;

// tolerance multiplier, default 4
this.tolerateHeartbeatMultiplier = conf.getInt(
    DFSConfigKeys.DFS_NAMENODE_TOLERATE_HEARTBEAT_MULTIPLIER_KEY, 4);
// heartbeat interval, default 3 seconds, stored in milliseconds
this.heartbeatInterval = conf.getLong(
    DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 3) * 1000;

public static long now() {
    return System.currentTimeMillis();
}
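Plugging in the defaults from the two conf.get calls above (a 3-second heartbeat interval and a tolerance multiplier of 4), the stale threshold works out to 12 seconds:

// heartbeatInterval           = 3 * 1000 = 3000 ms  (default)
// tolerateHeartbeatMultiplier = 4                   (default)
// oldestHeartbeat = now() - 3000 * 4 = now() - 12000 ms
// => a DataNode whose last heartbeat is more than 12 seconds old is
//    preferred for excess-replica removal over the lowest-space node.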
Summary: this post describes HDFS's policy for deleting excess replicas. It first prefers nodes on racks that hold more than one replica of the block; among those candidates it removes the replica whose DataNode has the stalest heartbeat, and otherwise removes the replica on the DataNode with the least free disk space.