1. Symptom
When GBase 8a exports data to a Kerberos-secured HDFS, the export fails with the following error:
select * from test.t1 into outfile 'hdp://hdfs@**********:900/t1.txt' outfilemode by hdfs null_value '' columns terminated by '\t' writemode by overwrites;
ERROR 1733 (HY000): (GBA-01EX-700) Gbase general error: Get token operation failed with error - test1 tries to renew a token (token for test1: HDFS_DELEGATION_TOKEN owner=test1@HADOOP.COM, renewer=hdfs, realUser=, issueDate=1697597444925, maxDate=1698202244925, sequenceNumber=17, masterKeyId=73) with non-matching renewer hdfs at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:509) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rene
The Hadoop NameNode log contains the same error:
test1 tries to renew a token (token for test1: HDFS_DELEGATION_TOKEN owner=test1@HADOOP.COM, renewer=hdfs, realUser=, issueDate=1697607984741, maxDate=1698212784741, sequenceNumber=33, masterKeyId=77) with non-matching renewer hdfs
2. Analysis
The following settings in Hadoop's hdfs-site.xml configuration file deserve attention:
<property>
    <name>dfs.namenode.keytab.file</name>
    <value>/home/hdfs/hadoop/conf/hdfs.keytab</value>
</property>
<property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>hdfs/_HOST@HADOOP.COM</value>
</property>
<!-- datanode security config -->
<property>
    <name>dfs.datanode.keytab.file</name>
    <value>/home/hdfs/hadoop/conf/hdfs.keytab</value>
</property>
<property>
    <name>dfs.datanode.kerberos.principal</name>
    <value>hdfs/_HOST@HADOOP.COM</value>
</property>
<property>
    <name>dfs.web.authentication.kerberos.principal</name>
    <value>HTTP/_HOST@HADOOP.COM</value>
</property>
The configuration dictates that the principal used to authenticate against the NameNode and DataNode must have the form hdfs/xxx@HADOOP.COM. In this case, the export was performed with the principal test1@HADOOP.COM, which clearly does not satisfy that constraint. When test1@HADOOP.COM attempts to renew the delegation token, the HDFS code detects that 'test1' does not match the token's renewer 'hdfs' and throws the exception above.
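Before changing anything, it can help to confirm which principal the client side is actually using. A minimal check with the standard MIT Kerberos tools (the keytab path below is an example, not the actual site path):

```shell
# Show the principal behind the current ticket cache
klist

# List the principals contained in a keytab file (path is illustrative)
klist -kt /opt/gbase/config/user.keytab
```

If the principal shown is a plain user such as test1@HADOOP.COM rather than an hdfs/... principal, it will hit the renewer check described above.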
3. Solution
- Recreate the principal in Kerberos so that it matches the form hdfs/xxx@HADOOP.COM (adapt to the actual site configuration) or hdfs@HADOOP.COM.
- Export a keytab file for the new principal and change its permissions so that the user running the export can read it.
- Update the GBase 8a configuration accordingly to use the newly created principal and keytab.
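The steps above can be sketched with standard MIT Kerberos admin commands. The realm matches the logs, but the hostname, keytab path, and the gbase user/group are illustrative assumptions and must be adapted to the site:

```shell
# 1. Create a principal matching the required hdfs/xxx@HADOOP.COM pattern
#    (hostname "gbase-node1" is an example)
kadmin.local -q "addprinc -randkey hdfs/gbase-node1@HADOOP.COM"

# 2. Export its keys to a keytab file (path is an example)
kadmin.local -q "ktadd -k /etc/security/keytabs/hdfs.gbase.keytab hdfs/gbase-node1@HADOOP.COM"

# 3. Make the keytab readable by the user that runs GBase 8a
#    (user/group "gbase" is an assumption)
chown gbase:gbase /etc/security/keytabs/hdfs.gbase.keytab
chmod 400 /etc/security/keytabs/hdfs.gbase.keytab

# 4. Verify that the keytab works before retrying the export
kinit -kt /etc/security/keytabs/hdfs.gbase.keytab hdfs/gbase-node1@HADOOP.COM
klist
```

Note that `ktadd` generates new keys and bumps the principal's key version number, so any older keytabs for the same principal become stale and should be replaced.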