ZooKeeper 3.8.0 cluster deployment with docker-compose

Because a ZooKeeper ensemble elects its leader by majority vote, an odd number of nodes is recommended: a 3-node cluster needs a quorum of 2 and tolerates 1 failure, while a 4-node cluster needs a quorum of 3 and still tolerates only 1.

1 Compose file

version: '3.7'

services:
  zoo1:
    image: zookeeper:3.8.0
    restart: always
    hostname: zoo1
    container_name: zookeeper-cluster-1
    ports:
      - 12181:2181
      - 18080:8080
    volumes:
      - "/home/data/cluster/zookeeper/zookeeper-1/data:/data"
      - "/home/data/cluster/zookeeper/zookeeper-1/datalog:/datalog"
      - "/home/data/cluster/zookeeper/zookeeper-1/logs:/logs"
    environment:
      ZOO_MY_ID: 1
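      # Note: ALLOW_ANONYMOUS_LOGIN is a Bitnami-image variable; the official zookeeper image ignores it and allows anonymous access by default.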
      ALLOW_ANONYMOUS_LOGIN: "yes"
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ZOO_4LW_COMMANDS_WHITELIST: "*" 
    networks:
      brzk-kafka:
        ipv4_address: 172.19.0.11

  zoo2:
    image: zookeeper:3.8.0
    restart: always
    hostname: zoo2
    container_name: zookeeper-cluster-2
    ports:
      - 22181:2181
      - 28080:8080
    volumes:
      - "/home/data/cluster/zookeeper/zookeeper-2/data:/data"
      - "/home/data/cluster/zookeeper/zookeeper-2/datalog:/datalog"
      - "/home/data/cluster/zookeeper/zookeeper-2/logs:/logs"
    environment:
      ZOO_MY_ID: 2
      ALLOW_ANONYMOUS_LOGIN: "yes"
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ZOO_4LW_COMMANDS_WHITELIST: "*"
    networks:
      brzk-kafka:
        ipv4_address: 172.19.0.12

  zoo3:
    image: zookeeper:3.8.0
    restart: always
    hostname: zoo3
    container_name: zookeeper-cluster-3
    ports:
      - 32181:2181
      - 38080:8080
    volumes:
      - "/home/data/cluster/zookeeper/zookeeper-3/data:/data"
      - "/home/data/cluster/zookeeper/zookeeper-3/datalog:/datalog"
      - "/home/data/cluster/zookeeper/zookeeper-3/logs:/logs"
    environment:
      ZOO_MY_ID: 3
      ALLOW_ANONYMOUS_LOGIN: "yes"
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ZOO_4LW_COMMANDS_WHITELIST: "*"
    networks:
      brzk-kafka:
        ipv4_address: 172.19.0.13

networks:
  brzk-kafka:
    ipam:
      driver: default
      config:
        - subnet: "172.19.0.0/24"
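
In ZOO_SERVERS, 2888 is the quorum port (followers connect to the leader through it), 3888 is the leader-election port, and the value after the semicolon is the client port. The bind mounts assume the host directories already exist; a minimal way to create them, using the /home/data/cluster/zookeeper prefix from the volumes above:

mkdir -p /home/data/cluster/zookeeper/zookeeper-{1,2,3}/{data,datalog,logs}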

2 Run

docker-compose --project-name myzkcomposeprj up -d
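
To confirm that all three containers are up:

docker-compose --project-name myzkcomposeprj ps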

3 Inspect the cluster

Run the ZooKeeper client inside one of the containers:

docker exec --interactive --tty zookeeper-cluster-1 bin/zkCli.sh -server :2181 config | grep ^server

[hostuser@host-machine]$ docker exec --interactive --tty zookeeper-cluster-3 bin/zkCli.sh -server :2181 config | grep ^server
server.1=zoo1:2888:3888:participant;0.0.0.0:2181
server.2=zoo2:2888:3888:participant;0.0.0.0:2181
server.3=zoo3:2888:3888:participant;0.0.0.0:2181

Check each node's role:
docker exec zookeeper-cluster-2 bash -c 'echo srvr | nc localhost 2181' | grep "Mode"

[hostuser@host-machine]$ docker exec zookeeper-cluster-1 bash -c 'echo "srvr" | nc localhost 2181' | grep "Mode"
Mode: follower
[hostuser@host-machine]$ docker exec zookeeper-cluster-2 bash -c 'echo "srvr" | nc localhost 2181' | grep "Mode"
Mode: follower
[hostuser@host-machine]$ docker exec zookeeper-cluster-3 bash -c 'echo "srvr" | nc localhost 2181' | grep "Mode"
Mode: leader

You can also check from the host through the mapped ports:
echo "srvr" | nc localhost 32181

[hostuser@host-machine]$ echo "stat" | nc localhost 32181 
Zookeeper version: 3.8.0-5a02a05eddb59aee6ac762f7ea82e92a68eb9c0f, built on 2022-02-25 08:49 UTC
Clients:
 /172.19.0.1:36848[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 1/8.5/25
Received: 7
Sent: 6
Connections: 1
Outstanding: 0
Zxid: 0x10000000e
Mode: leader
Node count: 5
Proposal sizes last/min/max: 48/48/48

You can also use bin/zkServer.sh status inside the container:

[hostuser@host-machine]$ docker exec zookeeper-cluster-3 bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader

If ZOO_4LW_COMMANDS_WHITELIST: "*" is set in the environment, the stat four-letter word also works:

docker exec zookeeper-cluster-2 bash -c 'echo "stat" | nc localhost 2181' | grep "Mode"

[hostuser@host-machine]$ docker exec zookeeper-cluster-2 bash -c 'echo "stat" | nc localhost 2181' | grep "Mode"
Mode: follower
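
Since every node's client port is mapped to the host, a short loop (assuming nc is installed on the host) checks all three roles at once:

for port in 12181 22181 32181; do
    printf '%s: ' "$port"
    echo srvr | nc localhost "$port" | grep Mode
done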

The newer approach is the AdminServer HTTP interface:
http://localhost:8080/commands/stat

root@zoo2:/apache-zookeeper-3.8.0-bin# wget --quiet --output-document=/dev/stdout http://localhost:8080/commands/stat | grep "server_state"
    "server_state" : "follower",

Or from the host, via the mapped port:

[hostuser@host-machine]$ curl --silent http://localhost:28080/commands/stat | grep "server_state"
    "server_state" : "follower",

4 Connecting to the cluster with zkCli.sh

Enter any container node with docker exec --interactive --tty zookeeper-cluster-2 bash, then use the ZooKeeper client to connect to a different node with zkCli.sh -server zoo3:2181.

root@zoo2:/apache-zookeeper-3.8.0-bin# zkCli.sh -server zoo3:2181
Connecting to zoo3:2181
...
Welcome to ZooKeeper!
JLine support is enabled
2022-05-21 16:38:14,563 [myid:zoo3:2181] - INFO  [main-SendThread(zoo3:2181):o.a.z.ClientCnxn$SendThread@1171] - Opening socket connection to server zoo3/172.19.0.13:2181.

...

[zk: zoo3:2181(CONNECTED) 0] create /test hello
Created /test

This creates a znode on this node with the string hello as its data. List and read it back:

[zk: zoo3:2181(CONNECTED) 0] ls /
[test, zookeeper]
[zk: zoo3:2181(CONNECTED) 1] get /test
hello
[zk: zoo3:2181(CONNECTED) 2] quit

Connect to another node and check:

root@zoo2:/apache-zookeeper-3.8.0-bin# zkCli.sh -server zoo1:2181
...
[zk: zoo1:2181(CONNECTED) 0] get /test
hello
[zk: zoo1:2181(CONNECTED) 1] ls /
[test, zookeeper]

The znode created via zoo3 is visible from zoo1, confirming it is replicated across the cluster.
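
zkCli can also show a znode's metadata (cZxid, mZxid, dataVersion, timestamps) with the stat command on any node:

[zk: zoo1:2181(CONNECTED) 2] stat /test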

5 API for Java

code on github
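
A minimal sketch of the standard org.apache.zookeeper client API against this cluster; the connection string assumes the host-mapped client ports from the compose file above, and error handling is omitted for brevity:

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.*;

public class ZkQuickStart {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Connection string uses the host-mapped client ports from the compose file.
        ZooKeeper zk = new ZooKeeper(
                "localhost:12181,localhost:22181,localhost:32181",
                15000,  // session timeout in milliseconds
                event -> {
                    // Default watcher: release the latch once the session is established.
                    if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                        connected.countDown();
                    }
                });
        connected.await();

        // Create a persistent znode, read its data back, then clean up.
        zk.create("/demo", "hello".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        System.out.println(new String(zk.getData("/demo", false, null)));
        zk.delete("/demo", -1);  // version -1 matches any version
        zk.close();
    }
}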

Ref

why-pipeline-content-to-command-nc-wont-work (Stack Overflow)
what-command-could-be-issued-to-check-whether-a-zookeeper-server-is-a-leader (Server Fault)
ZooKeeper AdminServer documentation

zookeeper_api
ZooKeeper-API-Java-Examples-Watcher
Official Java Example

java-zookeeper

Using docker compose to build zookeeper cluster
ZooKeeper cluster with Docker Compose
setting-up-an-apache-zookeeper-cluster-in-docker
Bitnami zookeeper 3.8.0 docker-compose-cluster.yml
