Deploying Kafka with docker-compose, SASL mode (password authentication)

```yaml
version: "3"
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    restart: always
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: wurstmeister/kafka
    restart: always
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 0
      # Replace 192.168.1.20 with your host's IP so clients outside the
      # compose network can reach the broker.
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.20:9092
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_PORT: 9092
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "false"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_HEAP_OPTS: "-Xms512M -Xmx512M"
  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    container_name: kafka-ui
    restart: always
    ports:
      - 10010:8080
    environment:
      - DYNAMIC_CONFIG_ENABLED=true
      - SERVER_SERVLET_CONTEXT_PATH=/ui-kafka
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
      - KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=PLAINTEXT
    depends_on:
      - zookeeper
      - kafka
```
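After `docker-compose up -d`, it is worth confirming that the mapped ports are actually reachable from the host before wiring up any clients. A minimal, stdlib-only sketch (the class name `PortCheck` and the 2-second timeout are my own choices; this only proves the TCP listener is up, not that Kafka itself is healthy):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The three ports mapped in the compose file above.
        System.out.println("kafka    9092:  " + reachable("127.0.0.1", 9092, 2000));
        System.out.println("zk       2181:  " + reachable("127.0.0.1", 2181, 2000));
        System.out.println("kafka-ui 10010: " + reachable("127.0.0.1", 10010, 2000));
    }
}
```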


kafka-ui

URL: http://localhost:10010/ui-kafka/

Java producer

```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.9.0</version>
</dependency>
```


```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class KafkaProducerTest {

    public static void main(String[] args) {

        Properties properties = new Properties();

        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");

        // KEY: used by Kafka to compute which partition of the topic the message is routed to
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // VALUE: the actual message payload
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Topic name "test" and the key/value below are placeholders.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(properties)) {
            producer.send(new ProducerRecord<>("test", "key-1", "hello kafka"));
            producer.flush();
        }
    }
}
```
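For completeness, a matching consumer sketch. The group id `test-group` and topic `test` are placeholders mirroring the producer; the poll loop is bounded here for demonstration, whereas a real consumer would loop until shutdown:

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class KafkaConsumerTest {

    // Consumer configuration mirroring the producer: same broker address,
    // string deserializers instead of serializers.
    static Properties consumerProps() {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
        properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        return properties;
    }

    public static void main(String[] args) {
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps())) {
            consumer.subscribe(Collections.singletonList("test"));
            // Bounded loop for the example; replace with `while (true)` in practice.
            for (int i = 0; i < 10; i++) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                            record.partition(), record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```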
### Deploying Kafka with Docker Compose and SASL/PLAIN authentication

To start an Apache Kafka instance with SASL/PLAIN authentication under Docker Compose, the `docker-compose.yml` file and the Kafka and ZooKeeper configuration files need to be adjusted accordingly.

#### Modify `docker-compose.yml`

Create or edit the `docker-compose.yml` to define the services:

```yaml
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    volumes:
      - ./zoo.cfg:/etc/kafka/zookeeper.properties
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9093:9093"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,SASL_PLAINTEXT://kafka:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: SASL_PLAINTEXT
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - ./server.properties:/etc/kafka/server.properties
```

This setup defines two listeners: an unencrypted `PLAINTEXT` connection on 9092 for internal use, and a `SASL_PLAINTEXT` connection on 9093 secured with SASL/PLAIN. Inter-broker communication is set to the secure listener, and an authorizer class is specified so that access control can be enforced.

#### Create the required configuration files

Beyond the port mappings and environment variables in the YAML above, a real deployment also needs the corresponding configuration files, such as `server.properties`. In particular, make sure it points the JAAS login module at the location of a custom `jaas.conf` file, and includes the following:

```properties
listeners=SASL_PLAINTEXT://:9093,PLAINTEXT://:9092
advertised.listeners=SASL_PLAINTEXT://your.host.name:9093,PLAINTEXT://your.host.name:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
security.inter.broker.protocol=SASL_PLAINTEXT
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=true
super.users=User:admin;User:someuser
```

In addition, provide a Java Authentication and Authorization Service (JAAS) configuration file named `jaas.conf` for the Kafka container to use.
```plaintext
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="password"
  user_admin="password"
  user_someuser="another_password";
};

Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="someuser"
  password="another_password";
};
```

Note the `user_<name>` entries in the `KafkaServer` section: with the PLAIN mechanism these define the accounts (and their passwords) that the broker will accept, so the broker's own `username`/`password` pair must appear among them.

The last step is to make these extra resources readable by the containers via mounts: place them in a suitable directory on the host and mount that path as a volume into the corresponding container. Once all of the preparation is done, start the cluster the usual way: `docker-compose up -d`.
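A client connecting to the SASL_PLAINTEXT listener must present JAAS credentials as well. A minimal sketch of the extra client properties, using the standard Kafka client configuration keys (the `admin`/`password` values and port 9093 are the example values from the `jaas.conf` above; merge these into the producer/consumer `Properties` in place of the plain bootstrap address):

```java
import java.util.Properties;

public class SaslClientConfig {
    // Extra properties a producer or consumer needs in order to talk to the
    // SASL_PLAINTEXT listener on port 9093. The JAAS login module is supplied
    // inline via sasl.jaas.config instead of a separate jaas.conf file.
    static Properties saslProps(String username, String password) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "127.0.0.1:9093");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"" + username + "\" password=\"" + password + "\";");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(saslProps("admin", "password"));
    }
}
```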