When submitting Flink jobs to YARN on a Kerberos-secured cluster, some configuration is needed for authentication.
Add the following to flink-conf.yaml;
without it, YARN will reject the job submission.
security.kerberos.krb5-conf.path: /etc/krb5.conf
security.kerberos.login.use-ticket-cache: true
security.kerberos.login.keytab: /data/flink.keytab
security.kerberos.login.principal: flink/api@EXAMPLE.COM
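Before submitting, it can help to confirm that the keytab and principal from the configuration above actually authenticate. A minimal sketch, reusing the keytab path and principal shown above (the job jar name is a placeholder):

```shell
# Verify the keytab can obtain a ticket for the principal
kinit -kt /data/flink.keytab flink/api@EXAMPLE.COM
klist   # should show a valid TGT for flink/api@EXAMPLE.COM

# Submit to YARN; the -D overrides mirror the flink-conf.yaml settings
# (./my-job.jar is a placeholder for your application jar)
flink run -t yarn-per-job \
  -Dsecurity.kerberos.login.keytab=/data/flink.keytab \
  -Dsecurity.kerberos.login.principal=flink/api@EXAMPLE.COM \
  ./my-job.jar
```

If `kinit` fails here, the problem is with the keytab or KDC, not with Flink.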
HBase SQL sink against a Kerberos-secured HBase
The table options must set 'properties.hbase.security.authentication' = 'kerberos'.
HBase DDL:
CREATE TABLE hbase_sink (
  rowkey STRING,
  cf ROW<br STRING>,
  PRIMARY KEY (rowkey) NOT ENFORCED
) WITH (
  'connector' = 'hbase-2.2',
  'table-name' = 'hbase_table',
  'sink.buffer-flush.max-rows' = '5',
  'zookeeper.quorum' = 'ip:2181',
  'properties.hbase.security.authentication' = 'kerberos'
);
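For a quick end-to-end check of the sink, it can be fed from a datagen source. A sketch (the source table name and rate are illustrative, not from the original setup):

```sql
-- Illustrative source matching the sink schema: rowkey plus one qualifier 'br'
CREATE TABLE datagen_source (
  rowkey STRING,
  br STRING
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '1'
);

-- Wrap the qualifier in a ROW to match the column family type ROW<br STRING>
INSERT INTO hbase_sink
SELECT rowkey, ROW(br) FROM datagen_source;
```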
Submitting the job still fails, with the following error:
ERROR org.apache.zookeeper.client.ZooKeeperSaslClient [] - An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper Client will go to AUTH_FAILED state.
This is caused by SASL being enabled on the ZooKeeper client. Modify flink-conf.yaml:
# security.kerberos.login.contexts: Client,KafkaClient
#==============================================================================
# ZK Security Configuration
#==============================================================================
zookeeper.sasl.disable: true
# Below configurations are applicable if ZK ensemble is configured for security
# Override below configuration to provide custom ZK service name if configured
# zookeeper.sasl.service-name: zookeeper
# The configuration below must match one of the values set in "security.kerberos.login.contexts"
# zookeeper.sasl.login-context-name: Client
This turns off SASL authentication for the ZooKeeper client.
Resubmit the job and it goes through.
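If the ZooKeeper ensemble genuinely requires SASL, disabling it is not an option; the alternative sketched below keeps SASL on and lets Flink's Kerberos login module supply the Client JAAS context. This assumes the quorum's `zookeeper/<host>` service principals are registered in the KDC, which the "Server not found in Kerberos database" error above says was not the case here:

```yaml
# Alternative: keep SASL enabled and authenticate to ZooKeeper
security.kerberos.login.contexts: Client
zookeeper.sasl.disable: false
zookeeper.sasl.service-name: zookeeper
zookeeper.sasl.login-context-name: Client
```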