Docker Compose
Introduction
Docker Compose makes managing containers easy and efficient: it is a tool for defining and running multi-container applications.
Previously we started containers one at a time with docker run. With Compose, you describe every service your application needs in a single YAML file, then create and start all of them with one command.
Compose is an official open-source Docker project and needs to be installed separately.
A Dockerfile lets a program run anywhere, but for a stack of services (web, redis, mysql, nginx, ...) you would have to write a Dockerfile per service and build and run each one individually.
With Compose, one file describes them all:
version: '2.0'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}
Even with 100 microservices, as long as the YAML is correct, docker compose up starts them all with a single command.
Key Compose concepts
- service: a container / application (web, redis, nginx, ...)
- project: a group of associated containers
Installation
# Install docker-compose
curl -L https://get.daocloud.io/docker/compose/releases/download/1.29.0/docker-compose-`uname -s`-`uname -m` > /usr/bin/docker-compose && chmod +x /usr/bin/docker-compose
# Check that the installation succeeded
[root@localhost ~]# docker-compose version
docker-compose version 1.29.1, build c34c88b2
docker-py version: 5.0.0
CPython version: 3.7.10
OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019
Getting started
Step 1:
# Create a directory for the project
[root@localhost ~]# mkdir composetest
[root@localhost ~]# cd composetest/
# Create a file named app.py
[root@localhost composetest]# vim app.py
# Paste in the following content
import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)
# Create a file named requirements.txt with the following content
[root@localhost composetest]# cat requirements.txt
flask
redis
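The retry loop in get_hit_count can be exercised without a running Redis. Below is a sketch of the same pattern against a stand-in counter (retry_incr and FlakyCounter are illustrative names, not part of the tutorial code; the builtin ConnectionError stands in for redis.exceptions.ConnectionError):

```python
import time

def retry_incr(incr, retries=5, delay=0.0):
    # Same shape as get_hit_count: retry on connection errors,
    # give up after `retries` consecutive failures.
    while True:
        try:
            return incr()
        except ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(delay)

class FlakyCounter:
    """Stand-in for cache.incr('hits'): fails n times, then counts."""
    def __init__(self, failures):
        self.failures = failures
        self.hits = 0

    def incr(self):
        if self.failures > 0:
            self.failures -= 1
            raise ConnectionError("redis not ready yet")
        self.hits += 1
        return self.hits

counter = FlakyCounter(failures=2)
print(retry_incr(counter.incr))  # first two attempts fail, third returns 1
```

With 5 retries and a 0.5-second sleep, the real app gives the redis container a few seconds to come up before a request fails.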
Step 2:
# Create a Dockerfile
[root@localhost composetest]# cat Dockerfile
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]
Step 3:
# Define the services in a Compose file
# In your project directory, create a file named docker-compose.yml and paste the following:
[root@localhost composetest]# cat docker-compose.yml
version: "3.9"
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
Step 4:
# Build and run the Compose project
[root@localhost composetest]# docker-compose up
# Check that it worked: two containers, one redis and one web
[root@localhost ~]# docker ps
CONTAINER ID IMAGE STATUS PORTS NAMES
f16ca0f0c1bb redis:alpine Up 56 seconds 6379/tcp composetest_redis_1
4295ad96c264 composetest_web Up 56 seconds 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp composetest_web_1
# Verify by hitting port 5000
[root@localhost ~]# curl localhost:5000
Hello World! I have been seen 1 times.
[root@localhost ~]# curl localhost:5000
Hello World! I have been seen 2 times.
[root@localhost ~]# curl localhost:5000
Hello World! I have been seen 3 times.
......
# docker images
# Every service defined in the YAML file is pulled or built as an image automatically; no manual docker run needed
[root@localhost ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
composetest_web latest 556578a61cbc 16 minutes ago 182MB
redis alpine efb4fa30f1cf 2 days ago 32.3MB
python 3.7-alpine ec8ed031b5be 2 days ago 41.8MB
Networking
[root@localhost ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
740eaf7fdcb1 bridge bridge local
5373ad2435b1 composetest_default bridge local
7637df824f28 host host local
c5e661ec0ea9 none null local
# All services in a project share one network and can reach each other by service name (DNS)
# Inspect the composetest_default network in detail
# The web and redis services are on the same network, each with its own address
[root@localhost ~]# docker network inspect composetest_default
"Containers": {
    "4295ad96c264d80d1e3b5ae2fb85c8280aa1384c9b38b2b90759507007806104": {
        "Name": "composetest_web_1",
        "EndpointID": "b1c0a8cd3cf2d3f8dadcc8047933d225f87449e7fbaa787386a8cce57ec6745f",
        "MacAddress": "02:42:ac:12:00:02",
        "IPv4Address": "172.18.0.2/16",
        "IPv6Address": ""
    },
    "f16ca0f0c1bbbc014ea5e9363964ac5e867e7537fe10f19f2e22b55c316a2f17": {
        "Name": "composetest_redis_1",
        "EndpointID": "86a4bf089030bc263e22a7bef43694e4b404bf5c80199d5d129ace9f3403cf2b",
        "MacAddress": "02:42:ac:12:00:03",
        "IPv4Address": "172.18.0.3/16",
        "IPv6Address": ""
    }
},
# Services on the same network can reach each other directly by name
Stopping docker-compose
[root@localhost composetest]# docker-compose down (must be run from the directory containing the YAML file)
Stopping composetest_redis_1 ... done
Stopping composetest_web_1 ... done
Removing composetest_redis_1 ... done
Removing composetest_web_1 ... done
Removing network composetest_default
# Or just press Ctrl+C in the terminal where it was started
YAML rules
Official reference: https://docs.docker.com/compose/compose-file/compose-file-v3/
# docker-compose.yml is the core of Compose!
# Three top-level layers: 1. version 2. services 3. other configuration (networks, volumes, global config)
version: "3.9"
services:
  service-name-1:
    # service configuration
    build: .
    ports:
      - "5000:5000"
    ......
  service-name-2:
    ......
volumes:
networks:
configs:
Open-source project
Blog
Official guide: https://docs.docker.com/samples/wordpress/
# Create the project directory
[root@localhost ~]# mkdir my_wordpress
[root@localhost ~]# cd my_wordpress
# Write docker-compose.yml
[root@localhost my_wordpress]# vim docker-compose.yml
version: "3.9"
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}
# Start it directly (detached)
docker-compose up -d
# Check the running containers
[root@localhost my_wordpress]# docker ps
CONTAINER ID IMAGE PORTS NAMES
cb3cbda9e023 wordpress:latest 0.0.0.0:8000->80/tcp, :::8000->80/tcp my_wordpress_wordpress_1
cb8609396f30 mysql:5.7 3306/tcp, 33060/tcp my_wordpress_db_1
# Open port 8000 in a browser

Docker Swarm
How it works
![Swarm working mode](https://i-blog.csdnimg.cn/blog_migrate/b4ffa1c141348c65ee455c1bb132c1e4.png)
Creating a cluster
- Initialize the first node
[root@docker01 ~]# docker swarm init --advertise-addr 192.168.100.10
Swarm initialized: current node (o80gp173of72se8iz75pxwntf) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-3q8vde9p6gzo9ak95p9ibfrg03zdgylhe90mqj0qczwqckyaa0-e6k6tce3mc8dgw9efec8u745h 192.168.100.10:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
# Get a join token
docker swarm join-token manager
docker swarm join-token worker
# Join a node (as a manager or as a worker)
docker swarm join
- Join nodes
# Join worker nodes
# Get the worker token
[root@docker01 ~]# docker swarm join-token worker
[root@docker02 ~]# docker swarm join --token SWMTKN-1-3q8vde9p6gzo9ak95p9ibfrg03zdgylhe90mqj0qczwqckyaa0-e6k6tce3mc8dgw9efec8u745h 192.168.100.10:2377
This node joined a swarm as a worker.
[root@docker03 ~]# docker swarm join --token SWMTKN-1-3q8vde9p6gzo9ak95p9ibfrg03zdgylhe90mqj0qczwqckyaa0-e6k6tce3mc8dgw9efec8u745h 192.168.100.10:2377
This node joined a swarm as a worker.
# Join another manager node
# Get the manager token
[root@docker01 ~]# docker swarm join-token manager
[root@docker04 ~]# docker swarm join --token SWMTKN-1-3q8vde9p6gzo9ak95p9ibfrg03zdgylhe90mqj0qczwqckyaa0-dfh72kdpqhr8zbrt8412niwln 192.168.100.10:2377
This node joined a swarm as a manager.
# View all nodes
[root@docker01 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
75dl8wm2buk88yd5wmqs5fgw9 Unknown Active
o80gp173of72se8iz75pxwntf * docker01 Ready Active Leader 20.10.6
ctke83w4sn6x6yay4ifh4aq5n docker02 Ready Active 20.10.6 # a blank MANAGER STATUS means the node is a worker
bulkh8t3wf9j5jpk9qrbtoe2z docker03 Ready Active 20.10.6
wravcvpq4by7u5hq9vr8n0tdx docker04 Ready Active Reachable 20.10.6
The Raft protocol
We just built a cluster with two managers and two workers. If one node goes down, can the others still work?
Raft: the cluster is usable only while a majority of the manager nodes are alive, so more than one manager must survive, which means a fault-tolerant cluster needs at least 3 managers.
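The majority rule is simple arithmetic. This sketch (illustrative only, not part of swarm) computes the quorum and the number of tolerable manager failures for a given manager count:

```python
def quorum(managers: int) -> int:
    # Raft majority: strictly more than half of the managers.
    return managers // 2 + 1

def fault_tolerance(managers: int) -> int:
    # How many managers may fail while a majority still survives.
    return (managers - 1) // 2

for n in (1, 2, 3, 4, 5):
    print(f"{n} managers: quorum {quorum(n)}, tolerates {fault_tolerance(n)} failure(s)")
```

With 2 managers the quorum is 2 and the tolerance is 0, which is exactly why stopping docker01 in the experiment below freezes the cluster; 3 managers tolerate 1 failure, and 5 tolerate 2.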
Experiment:
# Stop the Docker daemon on the Leader (docker01)
[root@docker01 ~]# systemctl stop docker
# Check whether the other manager (docker04) is still usable
# With only two managers, stopping docker01 leaves no majority, so the remaining manager is unusable too
[root@docker04 ~]# docker node ls
Error response from daemon: rpc error: code = DeadlineExceeded desc = context deadline exceeded
# Restart docker01
[root@docker01 ~]# systemctl start docker
[root@docker01 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
75dl8wm2buk88yd5wmqs5fgw9 Down Active
o80gp173of72se8iz75pxwntf * docker01 Ready Active Reachable 20.10.6
ctke83w4sn6x6yay4ifh4aq5n docker02 Ready Active 20.10.6
bulkh8t3wf9j5jpk9qrbtoe2z docker03 Unknown Active 20.10.6
wravcvpq4by7u5hq9vr8n0tdx docker04 Ready Active Leader 20.10.6 # docker04 has become the new Leader
# Try leaving the cluster from a node
[root@docker03 ~]# docker swarm leave
Node left the swarm.
# The node's status now shows Down
[root@docker04 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
o80gp173of72se8iz75pxwntf docker01 Ready Active Reachable 20.10.6
ctke83w4sn6x6yay4ifh4aq5n docker02 Ready Active 20.10.6
bulkh8t3wf9j5jpk9qrbtoe2z docker03 Down Active 20.10.6
wravcvpq4by7u5hq9vr8n0tdx * docker04 Ready Active Leader 20.10.6
Promoting three machines to managers
Workers just run workloads; management operations happen on manager nodes.
# Generate a manager token (only possible on a manager node)
[root@docker04 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join --token SWMTKN-1-3q8vde9p6gzo9ak95p9ibfrg03zdgylhe90mqj0qczwqckyaa0-dfh72kdpqhr8zbrt8412niwln 192.168.100.40:2377
# Join docker03 as a manager
[root@docker03 ~]# docker swarm join --token SWMTKN-1-3q8vde9p6gzo9ak95p9ibfrg03zdgylhe90mqj0qczwqckyaa0-dfh72kdpqhr8zbrt8412niwln 192.168.100.40:2377
This node joined a swarm as a manager.
[root@docker03 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
o80gp173of72se8iz75pxwntf docker01 Ready Active Reachable 20.10.6
ctke83w4sn6x6yay4ifh4aq5n docker02 Ready Active 20.10.6
joncfduq0qobkgwrosemydf85 * docker03 Ready Active Reachable 20.10.6
wravcvpq4by7u5hq9vr8n0tdx docker04 Ready Active Leader 20.10.6
# Remove the stale Down node entry
[root@docker01 ~]# docker node rm iy63p41ikjw0nilxzgk3vwvhn
# Now there are 3 managers and 1 worker; stop one manager
[root@docker04 ~]# systemctl stop docker
[root@docker04 ~]# ps -ef|grep docker
root 27769 25933 0 16:28 pts/0 00:00:00 grep --color=auto doc
# The other two managers keep working normally
# and docker01 has become the Leader again
[root@docker01 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
o80gp173of72se8iz75pxwntf * docker01 Ready Active Leader 20.10.6
ctke83w4sn6x6yay4ifh4aq5n docker02 Ready Active 20.10.6
joncfduq0qobkgwrosemydf85 docker03 Ready Active Reachable 20.10.6
wravcvpq4by7u5hq9vr8n0tdx docker04 Down Active Unreachable 20.10.6
[root@docker03 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
o80gp173of72se8iz75pxwntf docker01 Ready Active Leader 20.10.6
ctke83w4sn6x6yay4ifh4aq5n docker02 Ready Active 20.10.6
joncfduq0qobkgwrosemydf85 * docker03 Ready Active Reachable 20.10.6
wravcvpq4by7u5hq9vr8n0tdx docker04 Down Active Unreachable 20.10.6
# Stopping one more manager would leave no majority and make the cluster unusable
Conclusion:
- For high availability, run at least three manager nodes and keep more than half of them alive.
Hands-on
Elasticity, scaling, clusters.
From now on, say goodbye to docker run.
docker-compose up starts containers on a single machine only!
In a swarm we manage services instead of containers:
container --> service --> replicas
The docker service command
[root@docker01 ~]# docker service --help
Usage: docker service COMMAND
Commands:
create      Create a service
inspect     Show details of a service
logs        Fetch the logs of a service
ls          List services
ps          List the tasks of a service
rm          Remove a service
rollback    Revert a service to its previous configuration
scale       Scale services up or down
update      Update a service
This is what enables gray releases (canary deployments)!
docker run      # starts a single container; no ability to scale
docker service  # a service: supports scaling and rolling updates!
# Create a service
[root@docker01 ~]# docker service create --name my-nginx -p888:80 nginx
# Inspect the service
[root@docker01 ~]# docker service ps my-nginx
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
6mkcbs9nfkez my-nginx.1 nginx:latest docker02 Running Running about a minute ago
[root@docker01 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ngvtl97e1dwj my-nginx replicated 1/1 nginx:latest *:888->80/tcp
# The service was created on docker01, yet its container runs on docker02 (tasks are scheduled anywhere in the cluster)
[root@docker01 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@docker02 ~]# docker ps
CONTAINER ID IMAGE STATUS PORTS NAMES
cb9bf02f094c nginx:latest Up 4 minutes 80/tcp my-nginx.1.6mkcbs9nfkezz2fz50t2t24tq
[root@docker03 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@docker04 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Dynamic scaling
[root@docker01 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ngvtl97e1dwj my-nginx replicated 1/1 nginx:latest *:888->80/tcp
# There is only one replica; if traffic grows beyond what one replica can handle, scale up
# Update the replica count (dynamic scaling)
[root@docker01 ~]# docker service update --replicas 3 my-nginx
my-nginx
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service converged
# The new replicas are in place (distributed across the cluster's nodes)
[root@docker01 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ngvtl97e1dwj my-nginx replicated 3/3 nginx:latest *:888->80/tcp
# docker service scale does the same thing (change the count freely at any time)
[root@docker01 ~]# docker service scale my-nginx=2
my-nginx scaled to 2
overall progress: 2 out of 2 tasks
1/2: running [==================================================>]
2/2: running [==================================================>]
verify: Service converged
[root@docker01 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ngvtl97e1dwj my-nginx replicated 2/2 nginx:latest *:888->80/tcp
# Remove the service
[root@docker01 ~]# docker service rm my-nginx
my-nginx
[root@docker01 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
Service: the cluster acts as a whole and any of its nodes can serve requests; a service can run multiple replicas and be scaled dynamically at any time for high availability!
How it works: a service creates one or more replicas; each replica is a task, and each task runs one of our containers.
Internals: command --> manager --> API --> scheduler (placement) --> worker node (creates the task's container and keeps it running)
- The scheduler assigns each task to a node according to its internal algorithm
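Swarm's default placement strategy spreads tasks across the least-loaded nodes. A much-simplified model of that idea (illustrative only, not swarm's actual scheduler code):

```python
def spread_schedule(tasks, nodes):
    # Assign each task to the node currently running the fewest tasks
    # (ties broken by node order): a toy version of the "spread" strategy.
    load = {node: [] for node in nodes}
    for task in tasks:
        target = min(load, key=lambda n: len(load[n]))
        load[target].append(task)
    return load

placement = spread_schedule(
    ["my-nginx.1", "my-nginx.2", "my-nginx.3"],
    ["docker01", "docker02", "docker03", "docker04"],
)
print(placement)  # each replica lands on a different node
```

This is why the nginx container created on docker01 above could end up running on docker02, and why scaling to 3 replicas spreads them over the cluster.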
Concept summary
swarm
Cluster management and orchestration: Docker initializes a swarm cluster that other nodes can join, in one of two roles, manager or worker.
node
A single Docker node. Multiple nodes form the cluster network; node roles are manager and worker.
service
The core unit: a workload that can run on manager or worker nodes and that users access.
Task
A task is a running container plus the command executed inside it; it is the smallest unit of scheduling.
Docker Stack
Deploying and managing many services at scale is hard!
Fortunately, Docker Stack was built for exactly this problem. It simplifies application management with desired-state reconciliation, rolling upgrades, ease of use, scaling, and health checks, all wrapped in a declarative model.
A stack defines a complex multi-service application in a single declarative file, and provides a simple way to deploy the application and manage its complete lifecycle: initial deployment -> health checks -> scaling -> updates -> rollback, and more!
On a single machine, Docker Compose orchestrates multiple services, while docker service on a swarm only deploys one service at a time. With Docker Stack, a lightly adapted docker-compose.yml is all it takes to orchestrate multiple services across a Docker cluster.
Hands-on
- 5 application services: vote, redis, worker, db, result; 2 tool services: portainer and visualizer
# Create the project directory
mkdir /home/test
cd /home/test
# Write the docker-compose.yml file
[root@docker01 /home/test]# cat docker-compose.yml
version: "3"
services:
  redis:
    image: redis:alpine
    ports:
      - "6379"
    networks:
      - frontend
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
  db:
    image: postgres:9.4
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend
    deploy:
      placement:
        constraints: [node.role == manager]
  vote:
    image: dockersamples/examplevotingapp_vote:before
    ports:
      - 5000:80
    networks:
      - frontend
    depends_on:
      - redis
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
      restart_policy:
        condition: on-failure
  result:
    image: dockersamples/examplevotingapp_result:before
    ports:
      - 5001:80
    networks:
      - backend
    depends_on:
      - db
    deploy:
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
  worker:
    image: dockersamples/examplevotingapp_worker
    networks:
      - frontend
      - backend
    deploy:
      mode: replicated
      replicas: 1
      labels: [APP=VOTING]
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 3
        window: 120s
      placement:
        constraints: [node.role == manager]
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    stop_grace_period: 1m30s
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
  portainer:
    image: portainer/portainer
    ports:
      - "9000:9000"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == manager]
networks:
  frontend:
  backend:
volumes:
  db-data:
# Deploy the stack from a manager node
[root@docker01 /home/test]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
o80gp173of72se8iz75pxwntf * docker01 Ready Active Leader 20.10.6
ctke83w4sn6x6yay4ifh4aq5n docker02 Ready Active 20.10.6
joncfduq0qobkgwrosemydf85 docker03 Ready Active Reachable 20.10.6
wravcvpq4by7u5hq9vr8n0tdx docker04 Ready Active Reachable 20.10.6
[root@docker01 /home/test]# docker stack deploy example --compose-file=docker-compose.yml
Creating network example_backend
Creating network example_default
Creating network example_frontend
Creating service example_vote
Creating service example_result
Creating service example_worker
Creating service example_visualizer
Creating service example_portainer
Creating service example_redis
Creating service example_db
- Deployment complete
[root@docker01 /home/test]# docker stack ls
NAME SERVICES ORCHESTRATOR
example 7 Swarm
# List the services in the stack
[root@docker01 /home/test]# docker stack services example
ID NAME MODE REPLICAS IMAGE PORTS
12cqjkgrvklq example_db replicated 0/1 postgres:9.4
vxw74xs7ho5b example_portainer replicated 1/1 portainer/portainer:latest *:9000->9000/tcp
0bvpbk5cl7mh example_redis replicated 2/2 redis:alpine *:30000->6379/tcp
v17ohn1ur0ck example_result replicated 1/1 dockersamples/examplevotingapp_result:before *:5001->80/tcp
kvqbni4lck22 example_visualizer replicated 1/1 dockersamples/visualizer:stable *:8080->8080/tcp
clliuygsdmqb example_vote replicated 2/2 dockersamples/examplevotingapp_vote:before *:5000->80/tcp
ww9wc3pu24uo example_worker replicated 0/1 dockersamples/examplevotingapp_worker:latest
- Access
# Any node in the cluster can serve these; open them in a browser:
vote ip:5000
result ip:5001
portainer ip:9000
visualizer ip:8080
This article walked through Docker Compose's YAML configuration and first steps, creating a Docker Swarm cluster and experimenting with the Raft protocol, the concepts of service, Task, and Stack, and how to scale dynamically and manage large-scale multi-service deployments.