Deploying a MongoDB Sharded Cluster on Docker Swarm

This article walks through building a MongoDB sharded cluster on Docker Swarm: creating the cluster in unauthorized mode, configuring the shards, generating and distributing the keyfile, adding a user, and finally restarting the cluster in authorized mode.

Overview

  • This article covers building a MongoDB sharded cluster on Docker Swarm.
  • The cluster ultimately runs in authorized mode, but if you start with the authorized script right away, no user can be created. Create the users in unauthorized mode first, then restart in authorized mode. (The two modes use different stack files but mount the same directories.)

Architecture

  • Three nodes in total: breakpad (the manager), bpcluster, bogon

Prerequisites

  • Install docker
  • Initialize the swarm cluster
    • docker swarm init
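Since the cluster spans three machines, the two non-manager nodes must join the swarm after `docker swarm init`. A minimal sketch (guarded so it is a no-op on machines without Docker; the printed join command is what you run on bpcluster and bogon):

```shell
# On the manager (breakpad): initialize the swarm, then print the exact
# "docker swarm join --token ..." command the workers must run.
if command -v docker >/dev/null 2>&1; then
  docker swarm init 2>/dev/null || true   # no-op if this node is already in a swarm
  docker swarm join-token worker || true
else
  echo "skipped: docker is not installed on this machine"
fi
```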

Deployment steps

After the first three steps the cluster is usable; if you do not need authorized login, you can skip the remaining four steps.

  1. Create the directories
  2. Deploy the services (unauthorized mode)
  3. Configure sharding
  4. Generate the keyfile and fix its permissions
  5. Copy the keyfile to the other nodes
  6. Add the user
  7. Restart the services (authorized mode)

1. Create the directories

Run before-deploy.sh on every server

#!/bin/bash

DIR=/data/fates
DATA_PATH="${DIR}/mongo"
# NOTE: bash resets the PWD variable on every cd, so the sudo password must
# not be stored in PWD (the original script did, which broke after cd).
SUDO_PASS='1qaz2wsx!@#'

DATA_DIR_LIST=('config' 'shard1' 'shard2' 'shard3' 'script')

function check_directory() {
  if [ ! -d "${DATA_PATH}" ]; then
    echo "create directory: ${DATA_PATH}"
    echo "${SUDO_PASS}" | sudo -S mkdir -p "${DATA_PATH}"
  else
    echo "directory ${DATA_PATH} already exists."
  fi

  cd "${DATA_PATH}"

  for SUB_DIR in "${DATA_DIR_LIST[@]}"
  do
    if [ ! -d "${DATA_PATH}/${SUB_DIR}" ]; then
      echo "create directory: ${DATA_PATH}/${SUB_DIR}"
      echo "${SUDO_PASS}" | sudo -S mkdir -p "${DATA_PATH}/${SUB_DIR}"
    else
      echo "directory: ${DATA_PATH}/${SUB_DIR} already exists."
    fi
  done

  echo "${SUDO_PASS}" | sudo -S chown -R $USER:$USER "${DATA_PATH}"
}

check_directory


2. Start the mongo cluster in unauthorized mode

  • At this stage authorization is off and no login is required; this is what makes the later user creation possible

Create fates-mongo.yaml on the manager node and run the command below (adjust the constraints values to your own hostnames)

docker stack deploy -c fates-mongo.yaml fates-mongo
version: '3.4'
services:
  shard1-server1:
    image: mongo:4.0.5
    # --shardsvr: only changes the default port from 27017 to 27018; unnecessary if --port is given
    # --directoryperdb: store each database in its own directory
    command: mongod --shardsvr --directoryperdb --replSet shard1
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard2-server1:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard3-server1:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard1-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard1
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard2-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard3-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard1-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard1
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  shard2-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  shard3-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  config1:
    image: mongo:4.0.5
    # --configsvr: only changes the default port from 27017 to 27019; unnecessary if --port is given
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  config2:
    image: mongo:4.0.5
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  config3:
    image: mongo:4.0.5
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  mongos:
    image: mongo:4.0.5
    # mongo 3.6+ binds to 127.0.0.1 by default; binding 0.0.0.0 lets other containers and hosts connect
    command: mongos --configdb fates-mongo-config/config1:27019,config2:27019,config3:27019 --bind_ip 0.0.0.0 --port 27017
    networks:
      - mongo
    ports:
      - 27017:27017
    volumes:
      - /etc/localtime:/etc/localtime
    depends_on:
      - config1
      - config2
      - config3
    deploy:
      restart_policy:
        condition: on-failure
      mode: global

networks:
  mongo:
    driver: overlay
    # uncomment the next line if the network was created externally
    # external: true

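Before moving on to shard configuration, it is worth confirming that every service in the stack converged. A hedged check (the stack name `fates-mongo` comes from the deploy command above; guarded so it is a no-op without a reachable Docker daemon):

```shell
# Every replicated service should report REPLICAS 1/1; mongos is global,
# so it runs one task per node.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker stack services fates-mongo || echo "stack fates-mongo not deployed"
else
  echo "skipped: no reachable Docker daemon"
fi
```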

3. Configure sharding

# Initialize the config server replica set
docker exec -it $(docker ps | grep "config" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id: \"fates-mongo-config\",configsvr: true, members: [{ _id : 0, host : \"config1:27019\" },{ _id : 1, host : \"config2:27019\" }, { _id : 2, host : \"config3:27019\" }]})' | mongo --port 27019"

# Initialize the shard replica sets
docker exec -it $(docker ps | grep "shard1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard1\", members: [{ _id : 0, host : \"shard1-server1:27018\" },{ _id : 1, host : \"shard1-server2:27018\" },{ _id : 2, host : \"shard1-server3:27018\", arbiterOnly: true }]})' | mongo --port 27018"
docker exec -it $(docker ps | grep "shard2" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard2\", members: [{ _id : 0, host : \"shard2-server1:27018\" },{ _id : 1, host : \"shard2-server2:27018\" },{ _id : 2, host : \"shard2-server3:27018\", arbiterOnly: true }]})' | mongo --port 27018"
docker exec -it $(docker ps | grep "shard3" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard3\", members: [{ _id : 0, host : \"shard3-server1:27018\" },{ _id : 1, host : \"shard3-server2:27018\" },{ _id : 2, host : \"shard3-server3:27018\", arbiterOnly: true }]})' | mongo --port 27018"

# Register the shards with mongos
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard1/shard1-server1:27018,shard1-server2:27018,shard1-server3:27018\")' | mongo "
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard2/shard2-server1:27018,shard2-server2:27018,shard2-server3:27018\")' | mongo "
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard3/shard3-server1:27018,shard3-server2:27018,shard3-server3:27018\")' | mongo "
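After initiating the replica sets and adding the shards, `sh.status()` on mongos should list all three shards. A hedged check in the same docker exec style as above (guarded so it is a no-op when no mongos container is running):

```shell
# Print the sharding status; shard1, shard2 and shard3 should all appear.
if command -v docker >/dev/null 2>&1 && docker ps 2>/dev/null | grep -q mongos; then
  docker exec $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.status()' | mongo"
else
  echo "skipped: no running mongos container"
fi
```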

4. Generate the keyfile

After the first three steps the sharded cluster is up and usable; if you do not need authorization, you can stop here.

Run generate-keyfile.sh on the manager

#!/bin/bash

DATA_PATH=/data/fates/mongo
# NOTE: bash resets the PWD variable on every cd, so the sudo password must
# not be stored in PWD (the original script did, which broke after cd).
SUDO_PASS='1qaz2wsx!@#'

function check_directory() {
  if [ ! -d "${DATA_PATH}" ]; then
    echo "directory: ${DATA_PATH} does not exist, please run before-deploy.sh first."
    exit 1
  fi
}

function generate_keyfile() {
  cd "${DATA_PATH}/script"
  if [ ! -f "${DATA_PATH}/script/mongo-keyfile" ]; then
    echo 'create mongo-keyfile.'
    openssl rand -base64 756 -out mongo-keyfile
    # mongod rejects group/world-readable keyfiles; uid 999 is the mongodb
    # user inside the official container image.
    echo "${SUDO_PASS}" | sudo -S chmod 600 mongo-keyfile
    echo "${SUDO_PASS}" | sudo -S chown 999 mongo-keyfile
  else
    echo 'mongo-keyfile already exists.'
  fi
}

check_directory
generate_keyfile

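mongod refuses to start with a keyfile that is group- or world-readable, which is why the script sets mode 600 (and owner 999, the mongodb user inside the official image). A self-contained sketch of the generation step, run against a throwaway temp directory so it can be tried anywhere:

```shell
# Generate a demo keyfile and confirm the permission bits mongod expects.
TMP_DIR=$(mktemp -d)
if command -v openssl >/dev/null 2>&1; then
  openssl rand -base64 756 -out "${TMP_DIR}/mongo-keyfile"
else
  head -c 756 /dev/urandom | base64 > "${TMP_DIR}/mongo-keyfile"  # fallback without openssl
fi
chmod 600 "${TMP_DIR}/mongo-keyfile"
stat -c '%a' "${TMP_DIR}/mongo-keyfile"   # → 600
```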

5. Copy the keyfile into the script directory on the other servers

Run the copy from the server where the keyfile was generated (note the -p flag, which preserves the permissions set above)

sudo scp -p /data/fates/mongo/script/mongo-keyfile username@server2:/data/fates/mongo/script
sudo scp -p /data/fates/mongo/script/mongo-keyfile username@server3:/data/fates/mongo/script

6. Add the user

Run add-user.sh on the manager

The script creates user root with password root and the root role; change these as needed.

docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo -e 'use admin\n db.createUser({user:\"root\",pwd:\"root\",roles:[{role:\"root\",db:\"admin\"}]})' | mongo"
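Once the stack has been restarted in authorized mode (step 7), the account can be verified through mongos. A hedged check mirroring the docker exec pattern above (user/password root/root as created by add-user.sh; guarded so it is a no-op when no mongos container is running):

```shell
# Authenticate as root against the admin database via mongos.
if command -v docker >/dev/null 2>&1 && docker ps 2>/dev/null | grep -q mongos; then
  docker exec $(docker ps | grep "mongos" | awk '{ print $1 }') \
    bash -c "echo 'db.runCommand({connectionStatus: 1})' | mongo -u root -p root --authenticationDatabase admin"
else
  echo "skipped: no running mongos container"
fi
```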

7. Create the stack yaml for authorized mode

  • From this point authorization is on; operations require the username and password created in the previous step

Create fates-mongo-key.yaml on the manager, then redeploy in authorized mode (a different stack file, but the same mounted directories as before)

docker stack deploy -c fates-mongo-key.yaml fates-mongo
version: '3.4'
services:
  shard1-server1:
    image: mongo:4.0.5
    # --shardsvr: only changes the default port from 27017 to 27018; unnecessary if --port is given
    # --directoryperdb: store each database in its own directory
    command: mongod --shardsvr --directoryperdb --replSet shard1 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard2-server1:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard3-server1:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard1-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard1 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard2-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard3-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard1-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard1 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  shard2-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  shard3-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  config1:
    image: mongo:4.0.5
    # --configsvr: only changes the default port from 27017 to 27019; unnecessary if --port is given
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  config2:
    image: mongo:4.0.5
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  config3:
    image: mongo:4.0.5
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  mongos:
    image: mongo:4.0.5
    # mongo 3.6+ binds to 127.0.0.1 by default; binding 0.0.0.0 lets other containers and hosts connect
    command: mongos --configdb fates-mongo-config/config1:27019,config2:27019,config3:27019 --bind_ip 0.0.0.0 --port 27017  --keyFile /data/mongo-keyfile
    networks:
      - mongo
    ports:
      - 27017:27017
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    depends_on:
      - config1
      - config2
      - config3
    deploy:
      restart_policy:
        condition: on-failure
      mode: global

networks:
  mongo:
    driver: overlay
    # uncomment the next line if the network was created externally
    # external: true

Problems encountered

Startup failure

The logs (docker service logs <service-name>) showed a file could not be found: it had not been mounted into the container.

config3 failed to start

The mount path in the stack file was wrong.

Containers start, but connections are refused

Only the startup script was run; the sharding configuration (step 3) was never applied.

mongo-keyfile permission error: error opening file: /data/mongo-keyfile: Permission denied

  • the mongo-keyfile must be owned by uid 999 with mode 600

addShard fails

  • wait until mongos has fully started before running the addShard commands
  • make sure the constraints values in the stack file match your actual hostnames

After sharding, all data ends up on a single shard:

A chunk defaults to 64 MB in MongoDB 4.0; a small dataset fits inside one chunk, so nothing is balanced across shards. Lower the chunk size to verify the behavior.
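To actually watch data spread across shards with a small dataset, the global chunk size can be lowered on a test cluster. A hedged sketch via mongos (1 MB is only sensible for experiments, never production; guarded so it is a no-op when no mongos container is running):

```shell
# Set the global chunk size to 1 MB in the config database (test use only).
if command -v docker >/dev/null 2>&1 && docker ps 2>/dev/null | grep -q mongos; then
  docker exec $(docker ps | grep "mongos" | awk '{ print $1 }') \
    bash -c "echo -e 'use config\n db.settings.save({_id:\"chunksize\", value: 1})' | mongo"
else
  echo "skipped: no running mongos container"
fi
```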
