We now step into the method public static <T> ProxyAndInfo<T> createProxy(Configuration conf, URI nameNodeUri, Class<T> xface, AtomicBoolean fallbackToSimpleAuth) in org.apache.hadoop.hdfs.NameNodeProxies and analyze the HA-proxy branch. The code is as follows:
// HA case
Conf config = new Conf(conf);
T proxy = (T) RetryProxy.create(xface, failoverProxyProvider,
    RetryPolicies.failoverOnNetworkException(
        RetryPolicies.TRY_ONCE_THEN_FAIL, config.maxFailoverAttempts,
        config.maxRetryAttempts, config.failoverSleepBaseMillis,
        config.failoverSleepMaxMillis));

Text dtService;
if (failoverProxyProvider.useLogicalURI()) {
  dtService = HAUtil.buildTokenServiceForLogicalUri(nameNodeUri,
      HdfsConstants.HDFS_URI_SCHEME);
} else {
  dtService = SecurityUtil.buildTokenService(
      NameNode.getAddress(nameNodeUri));
}
return new ProxyAndInfo<T>(proxy, dtService,
    NameNode.getAddress(nameNodeUri));
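RetryProxy.create wraps the failover provider's target in a JDK dynamic proxy, so every call on the returned ClientProtocol goes through the retry/failover policy before reaching a NameNode. The following is a minimal, self-contained sketch of that pattern only; the Greeter interface, FlakyGreeter implementation, and withRetries helper are invented for illustration, and the real RetryProxy additionally consults the FailoverProxyProvider to switch NameNodes and applies the sleep/backoff parameters seen above:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class RetrySketch {
  // Hypothetical service interface standing in for ClientProtocol.
  public interface Greeter {
    String greet();
  }

  // Hypothetical implementation that fails a few times before succeeding,
  // simulating a NameNode that is briefly unreachable.
  public static class FlakyGreeter implements Greeter {
    private int failuresLeft;
    public FlakyGreeter(int failures) { this.failuresLeft = failures; }
    @Override public String greet() {
      if (failuresLeft-- > 0) {
        throw new RuntimeException("network error");
      }
      return "hello";
    }
  }

  // Wrap a target in a dynamic proxy that retries each call up to
  // maxAttempts times -- the same shape as RetryProxy.create(xface, ...).
  @SuppressWarnings("unchecked")
  public static <T> T withRetries(Class<T> iface, T target, int maxAttempts) {
    InvocationHandler handler = (proxyObj, method, args) -> {
      RuntimeException last = null;
      for (int attempt = 0; attempt < maxAttempts; attempt++) {
        try {
          return method.invoke(target, args);
        } catch (Exception e) {
          // method.invoke wraps the target's exception; unwrap it.
          Throwable cause = e.getCause() != null ? e.getCause() : e;
          last = cause instanceof RuntimeException
              ? (RuntimeException) cause : new RuntimeException(cause);
        }
      }
      throw last;
    };
    return (T) Proxy.newProxyInstance(
        iface.getClassLoader(), new Class<?>[] { iface }, handler);
  }

  public static void main(String[] args) {
    // Fails twice, succeeds on the third attempt, within the budget of 3.
    Greeter g = withRetries(Greeter.class, new FlakyGreeter(2), 3);
    System.out.println(g.greet()); // prints "hello"
  }
}
```

The caller only ever sees the interface type, which is why createProxy can return the proxy as a plain T and the rest of DFSClient stays unaware of retries and failover.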
Before continuing with the code above, let's first look at how failoverProxyProvider is created. This variable is an instance of the class configured under the key dfs.client.failover.proxy.provider. followed by nameNodeUri.getHost(). The code is as follows:
/** Creates the Failover proxy provider instance*/
// This method creates a failover proxy provider instance.
// conf is the configuration object.
// nameNodeUri contains the NameNode server URI.
// xface is ClientProtocol.class.
// checkPort is false here.
@VisibleForTesting
public static <T> AbstractNNFailoverProxyProvider<T> createFailoverProxyProvider(
    Configuration conf, URI nameNodeUri, Class<T> xface, boolean checkPort,
    AtomicBoolean fallbackToSimpleAuth) throws IOException {
  Class<FailoverProxyProvider<T>> failoverProxyProviderClass = null;
  AbstractNNFailoverProxyProvider<T> providerNN;
  // Throw unless xface is NamenodeProtocols itself or a superclass /
  // superinterface of NamenodeProtocols.
  Preconditions.checkArgument(
      xface.isAssignableFrom(NamenodeProtocols.class),
      "Interface %s is not a NameNode protocol", xface);
  try {
    // Obtain the class of the proxy provider
    // Returns the Class object of the class configured under
    // dfs.client.failover.proxy.provider. + nameNodeUri.getHost()
    failoverProxyProviderClass = getFailoverProxyProviderClass(conf,
        nameNodeUri);
    if (failoverProxyProviderClass == null) {
      return null;
    }
    // Create a proxy provider in
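The Preconditions.checkArgument guard above relies on standard Class.isAssignableFrom semantics. A quick sketch with made-up interfaces (Base and Full here stand in for ClientProtocol and the aggregate NamenodeProtocols, which extends ClientProtocol among other protocols) shows why passing xface = ClientProtocol.class passes the check:

```java
public class AssignableDemo {
  // Hypothetical hierarchy mirroring ClientProtocol vs. NamenodeProtocols.
  public interface Base {}
  public interface Full extends Base {}

  public static void main(String[] args) {
    // A superinterface is assignable from the aggregate interface ...
    System.out.println(Base.class.isAssignableFrom(Full.class)); // true
    // ... but not the other way around.
    System.out.println(Full.class.isAssignableFrom(Base.class)); // false
  }
}
```

So xface.isAssignableFrom(NamenodeProtocols.class) accepts ClientProtocol.class (and NamenodeProtocols.class itself) but rejects unrelated interfaces.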