Setting up the Kafka plugin for Ranger

This article describes how to enable and configure the Ranger Kafka plugin from the Ambari interface, including the Kerberos requirement, topic authorization policies, and a worked example. It covers verifying that the default policy user is valid, enabling the Kafka plugin through the Ranger settings, and syncing the policies. Topic creation can only be authorized when topics are auto-created by producers or consumers; the recommended policy grants produce and consume permissions on all topics. The example walks through creating a topic and starting a producer and a consumer.

Follow these steps to enable and configure the Kafka plugin for Ranger.

Before you begin

The default policy user (ambari-qa) used for a plugin must be an existing, valid user on the system that is configured for Ranger.
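
A quick way to confirm the user exists is to look it up on the host. This is a minimal check, assuming ambari-qa is still your default policy user:

    # Verify that the default policy user resolves on this host
    id ambari-qa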

Procedure

  1. From the Ambari web interface, select the Ranger service and then open the Configs tab. Select the Ranger Plugin tab.
    [Screen capture of the Ambari Ranger Configs dialog.]
  2. In the Ranger Plugin section, enable the Kafka Ranger Plugin, and then click Save.
    Note
    1. The Kafka Ranger plugin requires Kerberos. You will see a warning if you try to enable Kafka on a non-Kerberized cluster. For details, see the Kafka Plugin section of the Ranger FAQ.
    2. Topic creation can be authorized through Ranger, but only when the topic is auto-created by a consumer or producer. The recommended policy setup to authorize topic auto-creation for producers or consumers is as follows (see the sketch after this list):
      1. Create a policy whose resource is all topics, i.e. *.
      2. For producers, add a policy item under this policy that grants both Produce and Configure permissions to the relevant users or user groups.
      3. For consumers, add a policy item under this policy that grants both Consume and Configure permissions to the relevant users or user groups.
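    As an illustration only, such a policy can also be sketched as a Ranger Admin REST API call rather than created in the Ranger UI. The host, admin credentials, Kafka service name, and user names below are placeholders for your environment, and the Produce permission in the UI corresponds to the publish access type in the API:

      # Hypothetical example: create an all-topics policy via the Ranger Admin REST API.
      # Replace <ranger-host>, <kafka-service-name>, and the user names for your cluster.
      curl -u admin:admin -H "Content-Type: application/json" \
        -X POST "http://<ranger-host>:6080/service/public/v2/api/policy" \
        -d '{
          "service": "<kafka-service-name>",
          "name": "kafka-topic-autocreate",
          "resources": { "topic": { "values": ["*"] } },
          "policyItems": [
            { "users": ["<producer-user>"],
              "accesses": [ { "type": "publish",   "isAllowed": true },
                            { "type": "configure", "isAllowed": true } ] },
            { "users": ["<consumer-user>"],
              "accesses": [ { "type": "consume",   "isAllowed": true },
                            { "type": "configure", "isAllowed": true } ] }
          ]
        }'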

Example

The following is an example of how to use the Kafka Ranger plugin for authorization:
  1. Ensure that the default policy, which is created when the plugin is enabled, is itself enabled and has been synced to the Kafka broker.
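    One way to verify from the command line is to list the Kafka service's policies through the Ranger Admin REST API; the credentials, port, and service name below are assumptions for a default setup:

    # List policies for the Kafka service and check that the default policy is enabled
    curl -u admin:admin "http://<ranger-host>:6080/service/public/v2/api/service/<kafka-service-name>/policy"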
  2. Ensure that the Kerberos ticket has not expired; obtain a fresh ticket with the kinit command as the kafka user.
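    For example (the keytab path and principal below are common defaults and may differ on your cluster; the realm is taken from krb5.conf):

    # Check the current ticket, then renew from the Kafka service keytab
    klist
    kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/$(hostname -f)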
  3. As the kafka user, run the following command from the /usr/iop/current/kafka-broker/ directory to create a topic in Kafka:
    bin/kafka-topics.sh --create --zookeeper hostname.fyre.ibm.com:2181 --replication-factor 1 \
      --partitions 1 --topic test-topic
  4. Create files named producer.properties and consumer.properties, each containing the single line security.protocol=SASL_PLAINTEXT.
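    For example, assuming the files are created in the current directory (any path readable by the kafka user works):

    # Both files contain the same single setting
    echo "security.protocol=SASL_PLAINTEXT" > producer.properties
    echo "security.protocol=SASL_PLAINTEXT" > consumer.properties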
  5. As the kafka user, run the following command from the /usr/iop/current/kafka-broker/ directory to start the producer:
    bin/kafka-console-producer.sh --broker-list <cluster url>:6667 --topic test-topic \
      --producer.config <path>/producer.properties
  6. In another window, as the root user, run the following command from the /usr/iop/current/kafka-broker/ directory to start the consumer:
    bin/kafka-console-consumer.sh --topic test-topic --from-beginning --bootstrap-server <cluster url>:6667 \
      --consumer.config <path>/consumer.properties
  7. In the producer window, write some test messages and observe that they appear in the consumer window.
  8. Disable the policy and observe that both windows report errors indicating that the clients are no longer authorized.
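    The exact output depends on the client version, but the failure typically surfaces as an authorization error along these lines:

    # Typical authorization failure once the policy is disabled
    org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [test-topic]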
  9. Re-enable the policy and observe that messages can be sent and received properly again.

