The Complete Kafka Deployment Guide: SASL_SCRAM Authentication on Top of SSL

This article walks through implementing SASL_SCRAM authentication on top of SSL in Kafka, covering the configuration steps for both Zookeeper and Kafka as well as how to test the authentication. Key steps include editing configuration files, importing dependency jars, adding users, and adjusting startup scripts.


Table of Contents

 Configuring SASL for Zookeeper

Create the zoo_jaas.conf file

Configure the zookeeper.properties file (modify on every node)

Import dependency jars

Standalone zookeeper

Kafka-bundled zookeeper

Modify the startup script or environment variable script (modify on every node)

Standalone zookeeper

Kafka-bundled zookeeper

Configuring SASL_SCRAM for Kafka

Create the kafka_server_jaas.conf file

Configure the server.properties file (modify on every node)

Add scram users

Modify the startup script (modify on every node)

Restart zookeeper and kafka

Testing SASL_SCRAM/SSL authentication

Create authentication config files for the topic, producer, and consumer scripts

Start testing


The following uses the ZooKeeper bundled with Kafka as the example.

This article only covers adding SASL authentication on top of the SSL setup already enabled above, i.e. implementing SASL_SSL; some of the configuration differs from a setup where only SASL/PLAIN is enabled.

 Configuring SASL for Zookeeper

Create the zoo_jaas.conf file

  • Enter the zookeeper config directory
cd /opt/tools/kafka_2.13-3.2.1/config
  • Create the zoo_jaas.conf file
vim zoo_jaas.conf

Add the following content:

Server {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="rbt@123456"
    user_kafka="kafka@123";
};

  • Notes
    • Server.username and Server.password are the credentials for ZooKeeper's internal communication; they only need to be identical on every zk node.
    • In Server.user_xxx, xxx is a custom username; this is the account a zkClient uses to connect, i.e. the user created for Kafka (here user kafka with password kafka@123).
    • The ";" semicolon at the end of the last credential entry must not be omitted; this file therefore contains two ";" semicolons in total.
  • Copy the config file to every node
scp zoo_jaas.conf root@big2:/opt/tools/kafka_2.13-3.2.1/config
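For reference, the account defined by user_kafka above is the one Kafka presents when it connects to ZooKeeper. The matching Client section on the Kafka side would look roughly like this (an illustrative sketch based on the credentials above, not part of zoo_jaas.conf itself):

```
Client {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="kafka"
    password="kafka@123";
};
```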

Configure the zookeeper.properties file (modify on every node)

The config file is zoo.cfg for standalone ZooKeeper and zookeeper.properties for the Kafka-bundled version.

  • Add the following settings
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
zookeeper.sasl.client=true
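These settings can also be appended in one step with a heredoc; a minimal sketch (the file name is an assumption, adjust it to your config directory, and use zoo.cfg for standalone ZooKeeper):

```shell
# Append the SASL settings to ZooKeeper's config file.
CONF=zookeeper.properties
cat >> "$CONF" <<'EOF'
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
zookeeper.sasl.client=true
EOF
```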

Import dependency jars

Standalone zookeeper

Because the login module used is org.apache.kafka.common.security.plain.PlainLoginModule, the Kafka and kafka-clients jars must be imported. They can be found in the libs directory of the Kafka installation; the exact jar versions vary with the Kafka release. Create a directory named zk_sasl_lib under the ZooKeeper installation and place the jars in it (the directory name and location are arbitrary, as long as they are referenced consistently later).

 

  • Copy the jar files into the zk_sasl_lib directory
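The copy step might look like the following (an illustrative sketch only; the jar names and paths are assumptions that depend on your Kafka and ZooKeeper versions, so match them against your own libs directory):

```
# Copy the Kafka client jars (and their common dependencies) that
# ZooKeeper needs in order to load PlainLoginModule.
mkdir -p /apps/apache-zookeeper-3.8.1-bin/zk_sasl_lib
cp /opt/tools/kafka_2.13-3.2.1/libs/kafka-clients-*.jar \
   /opt/tools/kafka_2.13-3.2.1/libs/lz4-java-*.jar \
   /opt/tools/kafka_2.13-3.2.1/libs/snappy-java-*.jar \
   /opt/tools/kafka_2.13-3.2.1/libs/slf4j-api-*.jar \
   /apps/apache-zookeeper-3.8.1-bin/zk_sasl_lib/
```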

 

  • Edit the zookeeper environment variable script
vim bin/zkEnv.sh

Add the following content:

# append every jar in zk_sasl_lib to the ZooKeeper classpath
for i in /apps/apache-zookeeper-3.8.1-bin/zk_sasl_lib/*.jar;
do
    CLASSPATH="$i:$CLASSPATH"
done

Kafka-bundled zookeeper

The required jars ship with Kafka, so nothing needs to be imported.

Modify the startup script or environment variable script (modify on every node)

Modify the startup script or the environment variable script so that it loads the zoo_jaas.conf file.

Standalone zookeeper

  • Edit the environment variable script
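A common way to make standalone ZooKeeper load the JAAS file is to point the JVM at it from the environment script (a sketch; the installation path is an assumption based on the directories used earlier, so adjust it to your layout):

```shell
# In bin/zkEnv.sh (or conf/java.env): tell the JVM where the JAAS file is.
export SERVER_JVMFLAGS="-Djava.security.auth.login.config=/apps/apache-zookeeper-3.8.1-bin/conf/zoo_jaas.conf ${SERVER_JVMFLAGS}"
```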