ZooKeeper Cluster Deployment (Containerized)

I. ZooKeeper Basics

ZooKeeper is an open-source coordination service for distributed applications (it manages distributed services).

Its main features include:

  • Configuration management
  • Distributed locks
  • Cluster management

II. ZooKeeper Cluster Deployment

1. Prerequisites

1.1 Disable the firewall and SELinux

systemctl disable firewalld --now
setenforce 0
sed -i -r 's/SELINUX=[ep].*/SELINUX=disabled/g' /etc/selinux/config
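
The `sed` pattern matches both `SELINUX=enforcing` and `SELINUX=permissive` (via `[ep]`). You can dry-run it against sample input before touching the real config file:

```shell
# Dry-run the SELinux rewrite on sample lines; the real file is untouched.
printf 'SELINUX=enforcing\nSELINUX=permissive\n' \
  | sed -r 's/SELINUX=[ep].*/SELINUX=disabled/g'
# Both lines come out as: SELINUX=disabled
```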

1.2 Install Docker

(1) Install Docker

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache

# yum-utils provides the yum-config-manager utility
yum install -y yum-utils

# Use yum-config-manager to add the Aliyun Docker CE repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 -y

(2) Configure domestic registry mirrors

mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://vm1wbfhf.mirror.aliyuncs.com",
    "http://f1361db2.m.daocloud.io",
    "https://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://mirror.baidubce.com",
    "https://ustc-edu-cn.mirror.aliyuncs.com",
    "https://registry.cn-hangzhou.aliyuncs.com",
    "https://ccr.ccs.tencentyun.com",
    "https://hub.daocloud.io",
    "https://docker.shootchat.top",
    "https://do.nark.eu.org",
    "https://dockerproxy.com",
    "https://docker.m.daocloud.io",
    "https://dockerhub.timeweb.cloud"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
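
A syntax error in daemon.json prevents dockerd from starting at all, so it is worth validating the file before (re)starting the service. One minimal sketch, assuming python3 is available on the host:

```shell
# Validate a daemon.json before (re)starting Docker; bad JSON keeps dockerd down.
# Pass a path as $1, or let it default to the standard location.
f="${1:-/etc/docker/daemon.json}"
if python3 -m json.tool "$f" > /dev/null 2>&1; then
  echo "$f: valid JSON"
else
  echo "$f: invalid JSON" >&2
fi
```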

(3) Start Docker and enable it on boot

systemctl enable docker --now
systemctl status docker

1.3 Install docker-compose

DOCKER_COMPOSE_VERSION="v2.27.0"
sudo curl -L "https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
  
chmod +x /usr/local/bin/docker-compose

2. ZooKeeper Pseudo-Cluster Deployment (Optional)

A pseudo-cluster runs every node of the ZooKeeper ensemble on a single server.

2.1 Create a directory and add the docker-compose file

mkdir /data/software/zookeeper-cluster -p
cd /data/software/zookeeper-cluster
vim docker-compose.yml

version: '3.4'

services:
  zk1:
    image: zookeeper:3.4.14
    restart: always
    hostname: zk1
    container_name: zk1
    ports:
    - 2181:2181
    - 2888:2888
    - 3888:3888
    volumes:
    - "./data/zk1-data:/data"
    - "./datalog/zk1-datalog:/datalog"
    - "./logs/zk1-logs:/logs"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - zookeeper-net

  zk2:
    image: zookeeper:3.4.14
    restart: always
    hostname: zk2
    container_name: zk2
    ports:
    - 22181:2181
    - 22888:2888
    - 23888:3888
    volumes:
    - "./data/zk2-data:/data"
    - "./datalog/zk2-datalog:/datalog"
    - "./logs/zk2-logs:/logs"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - zookeeper-net

  zk3:
    image: zookeeper:3.4.14
    restart: always
    hostname: zk3
    container_name: zk3
    ports:
    - 32181:2181
    - 32888:2888
    - 33888:3888
    volumes:
    - "./data/zk3-data:/data"
    - "./datalog/zk3-datalog:/datalog"
    - "./logs/zk3-logs:/logs"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - zookeeper-net
networks:
  zookeeper-net:
    driver: bridge
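
The healthchecks above only verify that the client port is open. A slightly stricter variant asks the server directly via the `ruok` four-letter word, which answers `imok` when the server is running without error. A sketch of that healthcheck, assuming `ruok` has not been disabled (it is available by default on 3.4.x):

```yaml
    healthcheck:
      test: ["CMD", "sh", "-c", "echo ruok | nc 127.0.0.1 2181 | grep imok"]
      interval: 10s
      timeout: 5s
      retries: 3
```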

2.2 Start the ZooKeeper cluster:

cd /data/software/zookeeper-cluster
docker-compose up -d
docker-compose logs -f 

3. ZooKeeper Cluster Deployment (Optional)

3.1 Cluster environment

No.   IP address      Hostname
1     16.32.15.116    zk1
2     16.32.15.200    zk2
3     16.32.15.201    zk3
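
Three servers is the smallest useful ensemble: ZooKeeper stays available only while a strict majority of servers is up, so a 3-node cluster tolerates one failure and a 5-node cluster two. A quick check of the majority arithmetic:

```shell
# Majority quorum for an ensemble of n servers: floor(n/2) + 1.
for n in 3 5 7; do
  quorum=$(( n / 2 + 1 ))
  echo "ensemble=$n quorum=$quorum tolerates=$(( n - quorum )) failure(s)"
done
```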

3.2 Operations on host zk1

(1) Create a directory and add the docker-compose file

mkdir /data/software/zookeeper -p
cd /data/software/zookeeper
vim docker-compose.yml

version: '3.4'

services:
  zk1:                                  # name differs per node [ zk2 | zk3 ]
    image: zookeeper:3.4.14
    restart: always
    hostname: zk1                       # name differs per node [ zk2 | zk3 ]
    container_name: zk1                 # name differs per node [ zk2 | zk3 ]
    network_mode: "host"
    volumes:
      - "./data:/data"
      - "./datalog:/datalog"
      - "./logs:/logs"
    environment:
      ZOO_MY_ID: 1                      # ID differs per node [ 2 | 3 ]
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
      JVMFLAGS: "-Xmx1024m -Xms512m"
    extra_hosts:                        # map peer hostnames to host IP addresses
      - "zk1:16.32.15.116"
      - "zk2:16.32.15.200"
      - "zk3:16.32.15.201"
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    mem_limit: 2g                         # hard memory limit
    mem_reservation: 1500m                # soft memory limit
    logging:                              # log rotation limits
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"
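
The ZOO_SERVERS value must be identical on every node and is easy to mistype. A small loop can generate it from the ordered host list (hostnames as used in this guide):

```shell
# Build the ZOO_SERVERS value from an ordered host list.
hosts=(zk1 zk2 zk3)
servers=""
for i in "${!hosts[@]}"; do
  servers+="server.$(( i + 1 ))=${hosts[$i]}:2888:3888 "
done
servers="${servers% }"   # trim trailing space
echo "ZOO_SERVERS: $servers"
```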

(2) Start the zk1 container:

cd /data/software/zookeeper
docker-compose up -d
docker-compose logs -f 

3.3 Operations on host zk2

(1) Create a directory and add the docker-compose file

mkdir /data/software/zookeeper -p
cd /data/software/zookeeper
vim docker-compose.yml

version: '3.4'

services:
  zk2:
    image: zookeeper:3.4.14
    restart: always
    hostname: zk2
    container_name: zk2
    network_mode: "host"
    volumes:
      - "./data:/data"
      - "./datalog:/datalog"
      - "./logs:/logs"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
      JVMFLAGS: "-Xmx1024m -Xms512m"
    extra_hosts:                        # map peer hostnames to host IP addresses
      - "zk1:16.32.15.116"
      - "zk2:16.32.15.200"
      - "zk3:16.32.15.201"
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    mem_limit: 2g                         # hard memory limit
    mem_reservation: 1500m                # soft memory limit
    logging:                              # log rotation limits
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"

(2) Start the zk2 container:

cd /data/software/zookeeper
docker-compose up -d
docker-compose logs -f 

3.4 Operations on host zk3

(1) Create a directory and add the docker-compose file

mkdir /data/software/zookeeper -p
cd /data/software/zookeeper
vim docker-compose.yml

version: '3.4'

services:
  zk3:
    image: zookeeper:3.4.14
    restart: always
    hostname: zk3
    container_name: zk3
    network_mode: "host"
    volumes:
      - "./data:/data"
      - "./datalog:/datalog"
      - "./logs:/logs"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
      JVMFLAGS: "-Xmx1024m -Xms512m"
    extra_hosts:                        # map peer hostnames to host IP addresses
      - "zk1:16.32.15.116"
      - "zk2:16.32.15.200"
      - "zk3:16.32.15.201"
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    mem_limit: 2g                         # hard memory limit
    mem_reservation: 1500m                # soft memory limit
    logging:                              # log rotation limits
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"       

(2) Start the zk3 container:

cd /data/software/zookeeper
docker-compose up -d
docker-compose logs -f 

III. ZooKeeper Cluster Verification

1. Check cluster roles

yum -y install nc
zkList=(16.32.15.116 16.32.15.200 16.32.15.201)
for zkhost in ${zkList[@]};do zkMode=$(echo stat | nc ${zkhost} 2181 | grep Mode);echo [${zkhost}] ${zkMode};done
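
The loop above prints the whole `Mode:` line. If you only want the bare role word (e.g. to script on it), a small parser helps; `parse_zk_mode` below reads `stat` output on stdin, so the `nc` call itself stays unchanged:

```shell
# Extract the role (leader/follower/standalone) from `stat` output on stdin.
parse_zk_mode() {
  awk -F': ' '/^Mode/ {print $2}'
}

# Intended use against a live node (requires nc and a running ensemble):
#   echo stat | nc 16.32.15.116 2181 | parse_zk_mode
```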


2. Data synchronization test

# Create a znode on host zk1
docker exec -it zk1 bin/zkCli.sh
create /test "QIN TEST 666...."

# Read the znode back on host zk2
docker exec -it zk2 bin/zkCli.sh
get /test


3. Leader election test

  1. Find the current leader's IP address:

zkList=(16.32.15.116 16.32.15.200 16.32.15.201)
for zkhost in ${zkList[@]};do zkMode=$(echo stat | nc ${zkhost} 2181 | grep Mode);echo [${zkhost}] ${zkMode};done

  2. Stop the leader to simulate a server failure:

# run on the leader host (zk2 in this example)
cd /data/software/zookeeper
docker-compose down

  3. Confirm that a new leader has been elected:

zkList=(16.32.15.116 16.32.15.200 16.32.15.201)
for zkhost in ${zkList[@]};do zkMode=$(echo stat | nc ${zkhost} 2181 | grep Mode);echo [${zkhost}] ${zkMode};done