Installing the Ranger Kafka Plug-in

This section describes how to install and enable the Ranger Kafka plug-in.

  1. The Ranger Kafka plug-in is automatically installed when Kafka is installed. You can verify this plug-in is present by using the following command:

    rpm -qa | grep kafka-plugin
    ranger_2_4_2_0_258-kafka-plugin-0.5.0.2.4.2.0-258.el6.x86_64
  2. Navigate to /usr/hdp/<version>/ranger-kafka-plugin.

    cd /usr/hdp/<version>/ranger-kafka-plugin
  3. Edit the following entries in the install.properties file.

    Table 13.5. Properties to Edit in the install.properties File

    For each property, the list below gives the configuration property name and its purpose, the default or example value, and whether the property is required.

    Policy Admin Tool

    COMPONENT_INSTALL_DIR_NAME
        Default/example value: /usr/hdp/2.4.2.0-258/kafka
        Required: Y

    POLICY_MGR_URL: URL for the policy admin.
        Default/example value: http://<FQDN of ranger admin host>:6080
        Required: Y

    REPOSITORY_NAME: The repository name used in the Policy Admin Tool for defining policies.
        Default/example value: kafkadev
        Required: Y

    Audit Database

    SQL_CONNECTOR_JAR: Path to the SQL connector JAR of the selected DB flavor. The value should be the absolute path, including the JAR name.
        Default/example values: /usr/share/java/mysql-connector-java.jar (default), /usr/share/java/postgresql.jar, /usr/share/java/sqljdbc4.jar, /usr/share/java/ojdbc6.jar
        Required: Y

    XAAUDIT.DB.IS_ENABLED: Enable or disable database audit logging.
        Default/example values: FALSE (default), TRUE
        Required: Y

    XAAUDIT.DB.FLAVOUR: Specifies the type of database used for audit logging (MYSQL, ORACLE).
        Default/example value: MYSQL (default)
        Required: Y

    XAAUDIT.DB.HOSTNAME: Hostname of the audit database server.
        Default/example value: localhost
        Required: Y

    XAAUDIT.DB.DATABASE_NAME: Audit database name.
        Default/example value: ranger_audit
        Required: Y

    XAAUDIT.DB.USER_NAME: Username used for performing audit log inserts (should be the same username used in the ranger-admin installation process).
        Default/example value: rangerlogger
        Required: Y

    XAAUDIT.DB.PASSWORD: Database password associated with the above database user, for database audit logging.
        Default/example value: rangerlogger
        Required: Y

    HDFS Audit

    XAAUDIT.HDFS.IS_ENABLED: Flag to enable or disable HDFS audit logging. If HDFS audit logging is turned off, no access events are logged to HDFS.
        Required: Y

    XAAUDIT.HDFS.DESTINATION_DIRECTORY: HDFS directory where the audit log will be stored.
        Default/example value: hdfs://__REPLACE__NAME_NODE_HOST:8020/ (format); for example, hdfs://namenode.mycompany.com:8020/ranger/audit/%app-type%/%time:yyyyMMdd%
        Required: Y

    XAAUDIT.HDFS.LOCAL_BUFFER_DIRECTORY: Local directory where the audit log is saved for intermediate storage.
        Default/example value: hdfs://__REPLACE__NAME_NODE_HOST:8020/ (format); for example, /var/log/%app-type%/audit
        Required: Y

    XAAUDIT.HDFS.LOCAL_ARCHIVE_DIRECTORY: Local directory where the audit log is archived after it is moved to HDFS.
        Default/example value: __REPLACE__LOG_DIR%app-type%/audit/archive (format); for example, /var/log/%app-type%/audit/archive
        Required: Y

    XAAUDIT.HDFS.DESTINATION_FILE: HDFS audit file name (format).
        Default/example value: %hostname%-audit.log (default)
        Required: Y

    XAAUDIT.HDFS.DESTINATION_FLUSH_INTERVAL_SECONDS: Interval, in seconds, at which HDFS audit log file writes are flushed to HDFS.
        Default/example value: 900
        Required: Y

    XAAUDIT.HDFS.DESTINATION_ROLLOVER_INTERVAL_SECONDS: Interval, in seconds, at which the HDFS audit log file is rotated to write to a new file.
        Default/example value: 86400
        Required: Y

    XAAUDIT.HDFS.DESTINATION_OPEN_RETRY_INTERVAL_SECONDS: If the HDFS audit log open() call fails, it is retried at this interval, in seconds.
        Default/example value: 60
        Required: Y

    XAAUDIT.HDFS.LOCAL_BUFFER_FILE: Local filename used to store the audit log (format).
        Default/example value: %time:yyyyMMdd-HHmm.ss%.log (default)
        Required: Y

    XAAUDIT.HDFS.LOCAL_BUFFER_FLUSH_INTERVAL_SECONDS: Interval, in seconds, at which local audit log file writes are flushed to the filesystem.
        Default/example value: 60
        Required: Y

    XAAUDIT.HDFS.LOCAL_BUFFER_ROLLOVER_INTERVAL_SECONDS: Interval, in seconds, at which the local audit log file is rotated to write to a new file.
        Default/example value: 600
        Required: Y

    XAAUDIT.HDFS.LOCAL_ARCHIVE_MAX_FILE_COUNT: The maximum number of local audit log files that will be kept in the archive directory.
        Default/example value: 10
        Required: Y

    SSL Information (HTTPS connectivity to the Policy Admin Tool)

    SSL_KEYSTORE_FILE_PATH: Java keystore path where the SSL key for the plug-in is stored. Used only if SSL is enabled between the Policy Admin Tool and the plug-in; if SSL is not enabled, leave the default value as is (do not set it to EMPTY).
        Default/example value: /etc/hadoop/conf/ranger-plugin-keystore.jks (default)
        Required: Only if SSL is enabled

    SSL_KEYSTORE_PASSWORD: Password associated with the SSL keystore. Used only if SSL is enabled between the Policy Admin Tool and the plug-in; if SSL is not enabled, leave the default value as is (do not set it to EMPTY).
        Default/example value: none (default)
        Required: Only if SSL is enabled

    SSL_TRUSTSTORE_FILE_PATH: Java keystore path where the trusted certificates are stored for verifying the SSL connection to the Policy Admin Tool. Used only if SSL is enabled between the Policy Admin Tool and the plug-in; if SSL is not enabled, leave the default value as is (do not set it to EMPTY).
        Default/example value: /etc/hadoop/conf/ranger-plugin-truststore.jks (default)
        Required: Only if SSL is enabled

    SSL_TRUSTSTORE_PASSWORD: Password associated with the truststore file. Used only if SSL is enabled between the Policy Admin Tool and the plug-in; if SSL is not enabled, leave the default value as is (do not set it to EMPTY).
        Default/example value: none (default)
        Required: Only if SSL is enabled
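
    After editing, you can spot-check the entries that matter most for plug-in registration and database auditing. The snippet below is a minimal sketch, assuming a Ranger Admin host named ranger-admin.example.com, the kafkadev repository name from the table above, and MySQL as the audit database; confirm the exact key spellings and value casing against the install.properties shipped with your HDP version.

    cd /usr/hdp/<version>/ranger-kafka-plugin
    # Print the edited keys and their current values.
    grep -E '^(POLICY_MGR_URL|REPOSITORY_NAME|XAAUDIT.DB.IS_ENABLED|XAAUDIT.DB.FLAVOUR|XAAUDIT.DB.HOSTNAME)=' install.properties
    # Example output after editing:
    # POLICY_MGR_URL=http://ranger-admin.example.com:6080
    # REPOSITORY_NAME=kafkadev
    # XAAUDIT.DB.IS_ENABLED=true
    # XAAUDIT.DB.FLAVOUR=MYSQL
    # XAAUDIT.DB.HOSTNAME=localhost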

  4. Enable the Kafka plug-in by running the following commands (adjust JAVA_HOME to match the JDK path on your nodes):

    export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64
    ./enable-kafka-plugin.sh
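
    As an optional sanity check (a sketch, not part of the original procedure; it assumes the enable script writes its generated Ranger configuration next to the broker configuration, as in a standard HDP 2.4 layout), confirm that Ranger plug-in configuration files now exist in the Kafka config directory:

    # The enable script is expected to generate ranger-*-audit/security/ssl XML files here.
    ls -l /usr/hdp/2.4.2.0-258/kafka/config/ranger-*.xml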
  5. Enter the following commands to stop/start the Kafka service.

    su kafka -c "/usr/hdp/current/kafka-broker/bin/kafka stop" 
    su kafka -c "/usr/hdp/current/kafka-broker/bin/kafka start"
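
    Optionally, confirm the broker came back up (a hedged check; the listener port is whatever your server.properties configures, commonly 6667 on Ambari-managed HDP clusters or 9092 by upstream Kafka default):

    # Look for the running broker process and its listener port.
    ps -ef | grep '[k]afka\.Kafka'
    netstat -tlnp | grep -E ':(6667|9092)'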
    
  6. Create the default repository for Kafka in the Policy Admin Tool, with the proper configuration and the same repository name (REPOSITORY_NAME) that you specified in step 3.
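
    The repository is normally created from the Ranger Admin web UI (the Policy Admin Tool). If you prefer to script this step, the following is a hedged sketch using the public REST API as it existed around Ranger 0.5; the endpoint, config keys, credentials, and hostnames shown are assumptions to verify against your Ranger version:

    # Create a Kafka service/repository named kafkadev in Ranger Admin.
    curl -u admin:admin -H "Content-Type: application/json" -X POST \
         "http://<FQDN of ranger admin host>:6080/service/public/v2/api/service" \
         -d '{
               "name": "kafkadev",
               "type": "kafka",
               "configs": {
                 "username": "kafka",
                 "password": "kafka",
                 "zookeeper.connect": "<zookeeper host>:2181"
               }
             }'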

  7. You can verify that the plug-in is communicating with Ranger Admin via the Audit > Plugins tab in the Ranger Admin UI.

  8. If the plug-in is unable to communicate with Ranger Admin, check the authorizer.class.name property in /usr/hdp/2.4.2.0-258/kafka/config/server.properties. Its value should be org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer.
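
    A quick way to check this from the command line (path and expected value as given in this step):

    grep authorizer.class.name /usr/hdp/2.4.2.0-258/kafka/config/server.properties
    # Expected:
    # authorizer.class.name=org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer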

Source: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_installing_manually_book/content/installing_ranger_plugins.html

Reposted from: https://www.cnblogs.com/felixzh/p/10490274.html
