Docker是什么
Docker 是一个用于开发、发布和运行应用程序的开放平台。(容器运行载体或是管理引擎)
Docker 提供了在称为容器的松散隔离环境中打包和运行应用程序的能力。隔离和安全性使您可以在给定主机上同时运行多个容器。容器是轻量级的,包含运行应用程序所需的一切,因此您不需要依赖主机上安装的内容。您可以在工作时共享容器,并确保与您共享的每个人都获得以相同方式工作的相同容器。
Docker镜像(Image)就是一个只读的模板。镜像可以用来创建Docker容器,一个镜像可以创建很多容器。
仓库(Repository)是集中存放镜像文件的场所
Docker架构
Docker 使用客户端-服务器架构。Docker 客户端与 Docker 守护进程(Docker daemon)通信,后者负责构建、运行和分发 Docker 容器的繁重工作。Docker 客户端和守护进程可以在同一系统上运行,也可以将 Docker 客户端连接到远程 Docker 守护进程。Docker 客户端和守护进程使用 REST API 通过 UNIX 套接字或网络接口进行通信。另一个 Docker 客户端是 Docker Compose,它允许您使用由一组容器组成的应用程序。
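可以用一个小演示直观感受“客户端通过 REST API 和守护进程通信”这件事(示例为示意性质,假设 Docker 守护进程监听默认的 UNIX 套接字 /var/run/docker.sock,且本机 curl 版本支持 --unix-socket,即 7.40 以上):
# 通过 REST API 查询守护进程版本信息,效果类似 docker version 的一部分输出
curl --unix-socket /var/run/docker.sock http://localhost/version
# 列出本地镜像,docker images 背后调用的就是类似的接口
curl --unix-socket /var/run/docker.sock http://localhost/images/json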
命令大体图示
Docker安装
这里以 CentOS 7 操作系统的服务器为例安装 Docker
# 1、卸载旧版本
yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-selinux \
           docker-engine-selinux \
           docker-engine
# 2、安装yum工具
yum install -y yum-utils
# 3、设置docker镜像源(阿里云镜像,不用外网的)
yum-config-manager \
    --add-repo \
    https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# 更新yum软件包索引
yum makecache fast
# 4、安装docker相关组件(docker-ce 是社区版,ee 是企业版)
yum install docker-ce docker-ce-cli containerd.io
# 5、启动docker
systemctl start docker
# 查看是否安装成功
docker version
# 做个小测试,启动hello-world镜像
# 下面的运行情况说明:1.没有找到hello-world镜像;2.去远程仓库下载;3.输出‘Hello from Docker’说明安装成功
docker run hello-world
# 查看下载的镜像
docker images
REPOSITORY    TAG       IMAGE ID       CREATED        SIZE
hello-world   latest    9c7a54a9a43c   6 months ago   13.3kB
了解:卸载docker
# 1、卸载依赖
yum remove docker-ce docker-ce-cli containerd.io
# 2、删除资源
rm -rf /var/lib/docker
# /var/lib/docker 是 docker 的默认工作路径(安装路径)
找到阿里云容器镜像服务的镜像加速器
配置镜像加速器
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://1ffd4bti.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
以后拉取(pull)镜像时,就从阿里云的加速器下载,不用再从外网下载镜像了
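可以用下面的方式粗略验证一下加速器是否生效(假设上面的 daemon.json 已写入并且 docker 已重启):
# docker info 的输出里应该能看到 Registry Mirrors 一节,包含上面配置的加速地址
docker info | grep -A 1 "Registry Mirrors"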
运行镜像的流程:
为什么Docker比VM(虚拟机)快?
- Docker 有着比虚拟机更少的抽象层
- docker 利用的是宿主机的内核,vm 需要 Guest OS
所以说,新建一个容器的时候,docker不需要像虚拟机一样重新加载一个操作系统内核,避免引导。虚拟机是加载Guest OS,分钟级别
而docker是利用 宿主机的操作系统,省略了这个复杂的过程,秒级!
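可以用一个小实验直观验证“容器共用宿主机内核”这一点(示例,假设已经拉取了 centos 镜像):
# 在宿主机上查看内核版本
uname -r
# 在容器里查看内核版本,--rm 表示容器运行结束后自动删除
docker run --rm centos uname -r
# 两条命令输出的内核版本一致,说明容器并没有自己的内核,用的就是宿主机的内核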
Docker的常用命令
帮助命令
docker version       # 显示docker的版本信息
docker info          # 显示docker的系统信息,包括镜像和容器的数量
docker 命令 --help    # 帮助命令;查询所有命令:docker --help;查询特定命令:docker images --help
帮助文档的地址:Reference documentation | Docker Docs
镜像命令
docker images:查看本地主机上的所有镜像
查看镜像(images)
docker images [可选项] [指定镜像]
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE hello-world latest 9c7a54a9a43c 6 months ago 13.3kB [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker images hello-world REPOSITORY TAG IMAGE ID CREATED SIZE hello-world latest 9c7a54a9a43c 6 months ago 13.3kB # 列解释 REPOSITORY 镜像的仓库源 TAG 镜像的标签 IMAGE ID 镜像的id CREATED 镜像的创建时间 SIZE 镜像的大小 # 可选项 -a, --all # 列出所有镜像(不加这个标签直接写docker images也行) -q, --quiet # 只显示镜像的id # 举例: docker images -a
搜索镜像(search)
docker search [镜像名称]
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker search mysql NAME DESCRIPTION STARS OFFICIAL AUTOMATED mysql MySQL is a widely used, open-source relation… 14654 [OK] mariadb MariaDB Server is a high performing open sou… 5589 [OK] percona Percona Server is a fork of the MySQL relati… 622 [OK] phpmyadmin phpMyAdmin - A web interface for MySQL and M… 902 [OK] bitnami/mysql Bitnami MySQL Docker Image 104 [OK] circleci/mysql MySQL is a widely used, open-source relation… 29 ... # 可选项,通过搜索来过滤 --filter=STARS=3000 # 搜索出来的镜像就是STARS大于3000的 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker search mysql --filter=STARS=3000 NAME DESCRIPTION STARS OFFICIAL AUTOMATED mysql MySQL is a widely used, open-source relation… 14654 [OK] mariadb MariaDB Server is a high performing open sou… 5589 [OK]
下载镜像(pull)
docker pull [镜像名称]
# 下载镜像 docker pull 镜像名[:tag] [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker pull mysql Using default tag: latest # 如果不写tag,默认就是latest latest: Pulling from library/mysql 72a69066d2fe: Pull complete # 分层下载,docker image的核心 联合文件系统 93619dbc5b36: Pull complete 99da31dd6142: Pull complete ... Digest: sha256:e9027fe4d91c0153429607251656806cc784e914937271037f7738bd5b8e7709 Status: Downloaded newer image for mysql:latest docker.io/library/mysql:latest #真实地址 # 等价于它 docker pull mysql docker pull docker.io/library/mysql:latest #指定版本 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker pull mysql:5.7 5.7: Pulling from library/mysql 72a69066d2fe: Already exists ... 0ceb82207cd7: Pull complete ... Digest: sha256:f2ad209efe9c67104167fc609cca6973c8422939491c9345270175a300419f94 Status: Downloaded newer image for mysql:5.7 docker.io/library/mysql:5.7
提交镜像(commit)
docker commit:提交容器成为一个新的副本,和git命令相似
docker commit -m="提交的描述信息" -a="作者" 容器id 目标镜像名:[TAG]
举例:
# 1、启动一个默认的tomcat [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -it -p 8080:8080 tomcat # exit退出前台 # 进入容器中 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker exec -it 2f13b92f1150 /bin/bash # 2、发现这个默认的tomcat 是没有webapps应用,镜像的原因,官方的镜像默认webapps下面是没有文件的,自己拷贝进去了基本的文件 root@2f13b92f1150:/usr/local/tomcat# cp -r webapps.dist/* webapps # 3、提交成为新的镜像 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker commit -a="zhangzhendong" -m="tomcat webapps has application" 2f13b92f1150 tomcat02:1.0 # 查看可以看出tomcat02比tomcat多了一点大小 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE tomcat02 1.0 64b996ff638e 18 seconds ago 684MB tomcat latest fb5657adc892 23 months ago 680MB
提交镜像可以当作学习VM时候的快照,如果想要保存当前容器的状态,就可以通过commit来提交,获得一个镜像。
删除镜像(rmi)
docker rmi [镜像名称]
删除指定镜像
docker rmi -f $(docker images -aq)
删除所有镜像
# 递归删除所有镜像: docker rmi -f $(docker images -aq) [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker rmi -f $(docker images -aq) Untagged: mysql:latest Untagged: mysql@sha256:e9027fe4d91c0153429607251656806cc784e914937271037f7738bd5b8e7709 Deleted: sha256:3218b38490cec8d31976a40b92e09d61377359eab878db49f025e5d464367f3b ...
容器命令
有了镜像才可以创建容器。下面以 linux 为例,下载一个 centos 镜像来学习容器命令
docker pull centos
新建并启动容器(run)
docker run [可选参数] [镜像名称]
新建容器并启动
# 参数说明
--name="Name"    容器名字,如 tomcat01、tomcat02,用来区分容器
-d               后台方式运行(后台进程)
-it              使用交互方式运行,进入容器查看内容(前台交互)
-p ip:主机端口:容器端口
-p 主机端口:容器端口(常用)
-p 容器端口       # 只指定容器端口,主机端口随机分配
-P               # 随机指定端口

# 拉取镜像
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker pull centos
Using default tag: latest
...

# 测试,启动并进入容器
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -it centos /bin/bash   # 启动并进入容器
[root@348e290df749 /]# ls   # 查看容器内的centos,基础版本,很多命令不完善
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

# 从容器中退回主机
[root@348e290df749 /]# exit
exit
# 其他使用
docker run -d --name nginx01 -p 3344:80 nginx
查看容器(ps)
docker ps [选项]
# 列出当前正在运行的容器 docker ps # 列出当前正在运行的容器+带出历史运行过的容器 docker ps -a # 选项: -a # 列出当前正在运行的容器+带出历史运行过的容器 -all -n=? # 显示最近创建的容器 -q # 只显示容器的编号 # 测试 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a8423c2f9a9a centos "/bin/bash" 50 seconds ago Exited (127) 28 seconds ago lucid_chatelet 348e290df749 centos "/bin/bash" 23 minutes ago Exited (0) 22 minutes ago cool_torvalds ... [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker ps -aq a8423c2f9a9a 348e290df749 ...
退出容器
exit           # 容器直接停止并退出
Ctrl + P + Q   # 容器不停止退出

# 测试
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -it centos /bin/bash
[root@bc5b77febb4e /]#
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker ps
CONTAINER ID   IMAGE     COMMAND       CREATED          STATUS          PORTS     NAMES
bc5b77febb4e   centos    "/bin/bash"   40 seconds ago   Up 39 seconds             quirky_keldysh
进入正在运行的容器(exec/attach)
# 我们通常容器都是使用后台方式运行的,需要进入容器,修改一些配置 # 方式一 docker exec -it 容器id [bashShell] # 测试 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES cc5e9fc03c24 centos "/bin/sh -c 'while t…" 31 minutes ago Up 31 minutes epic_lichterman [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker exec -it cc5e9fc03c24 /bin/bash [root@cc5e9fc03c24 /]# ls bin dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var [root@cc5e9fc03c24 /]# ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 06:16 ? 00:00:00 /bin/sh -c while true;do echo zhangzhen;sleep 1;done root 1968 0 0 06:49 pts/0 00:00:00 /bin/bash root 2206 1 0 06:52 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1 root 2207 1968 0 06:52 pts/0 00:00:00 ps -ef # 方式二 docker attach 容器id # 测试 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker attach cc5e9fc03c24 正在执行当前代码... # 若是间断输出数据不太好退出 # docker exec # 进入容器后开启一个新的终端,可以在里面操作(常用) # docker attach # 进入容器正在执行的终端,不会启动新的进程
删除容器(rm)
docker rm 容器id
删除指定的容器;不能删除正在运行的容器,如果要强制删除需要加 -f
docker rm -f $(docker ps -aq)
删除所有的容器,或者用 docker ps -a -q | xargs docker rm
# 测试 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES bc5b77febb4e centos "/bin/bash" 10 minutes ago Up 10 minutes # 已经运行10分钟了 quirky_keldysh 348e290df749 centos "/bin/bash" 41 minutes ago Exited (0) 41 minutes ago #41分钟前终止运行 cool_torvalds 4f3150a998a8 centos "/bin、bash" 41 minutes ago Created stoic_spence e176ce41beaf feb5d9fea6a5 "/hello" 22 hours ago Exited (0) 22 hours ago crazy_yonath # 删除已经终止的容器 成功 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker rm 348e290df749 348e290df749 # 删除正在运行的容器 失败 除非加-f选项 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker rm bc5b77febb4e Error response from daemon: You cannot remove a running container bc5b77febb4e82a63bf6abf5c851c0d14eae5f902746d4e902e4d5daf1b11bbe. Stop the container before attempting removal or force remove
启动和停止容器
docker start 容器id # 启动容器 docker restart 容器id # 重启容器 docker stop 容器id # 停止当前正在运行的容器 docker kill 容器id # 强制停止当前容器 # 测试 开始没有容器,新建一个 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -it centos /bin/bash [root@d9f2d1abc54d /]# exit exit # 查看容器状态 发现已经停止了 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d9f2d1abc54d centos "/bin/bash" 32 seconds ago Exited (0) 5 seconds ago quirky_hypatia # 启动容器 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker start d9f2d1abc54d d9f2d1abc54d # 发现容器开始运行了 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d9f2d1abc54d centos "/bin/bash" About a minute ago Up 5 seconds quirky_hypatia
注意:容器以一个主进程为核心来管理,如果这个主进程结束了,容器就退出了;但这并不表示容器只能运行一个进程(其他进程可在后台运行)。要使容器不退出,必须要有一个进程在前台执行(docker 容器发现没有前台应用,就会自动停止)
批量停止启动容器:
# 启动所有容器
docker start $(docker ps -a | awk '{ print $1}' | tail -n +2)
# 停止所有容器
docker stop $(docker ps -a | awk '{ print $1}' | tail -n +2)
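由于 docker ps -aq 本身就只输出容器 id,批量启动/停止也可以写得更简单一些(与上面 awk 的写法效果相同,仅供参考):
# 启动所有容器
docker start $(docker ps -aq)
# 停止所有正在运行的容器
docker stop $(docker ps -q)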
举例:
如果使用 docker run -d centos 命令以后台方式创建并运行容器,那么容器一直都是退出状态(Exited),再启动它也很快又变回 Exited 状态。
如果使用 docker run -it centos /bin/bash 命令以交互方式创建并运行容器,那么当从容器中退回到主机时,容器状态是 Exited;然后执行 docker start 该容器id,再查看容器状态就成了 Up ...
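如果确实想让一个没有前台服务的容器在后台一直运行,常见做法是给它一个不会退出的前台进程,下面是一个小示例(容器名 mycentos 是随意起的):
# tail -f /dev/null 会一直阻塞不退出,容器因此保持 Up 状态
docker run -d --name mycentos centos tail -f /dev/null
docker ps    # STATUS 应显示 Up ...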
查看docker容器信息
查看日志(logs)
docker logs -f -t --tail 数量 容器id
# 自己编写一段shell脚本 容器启动后执行 输出一次zhangzhen,睡一秒,再做一次(每隔一秒输出一次zhangzhen) [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -d centos /bin/sh -c "while true;do echo zhangzhen;sleep 1;done" cc5e9fc03c2483464fd7fdf59e08700484c433a2861b1fcc75b5a4957ee30b68 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES cc5e9fc03c24 centos "/bin/sh -c 'while t…" 4 seconds ago Up 3 seconds epic_lichterman # 显示日志 -tf # 显示日志 --tail number # 要显示日志条数 # 测试 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker logs -tf --tail 10 cc5e9fc03c24 2023-12-01T06:19:06.082599561Z zhangzhen 2023-12-01T06:19:07.084247236Z zhangzhen 2023-12-01T06:19:06.082599561Z zhangzhen 2023-12-01T06:19:07.084247236Z zhangzhen 2023-12-01T06:19:06.082599561Z zhangzhen 2023-12-01T06:19:07.084247236Z zhangzhen 2023-12-01T06:19:06.082599561Z zhangzhen 2023-12-01T06:19:07.084247236Z zhangzhen 2023-12-01T06:19:06.082599561Z zhangzhen 2023-12-01T06:19:07.084247236Z zhangzhen ... 然后每隔一秒输出一次
查看容器中的进程信息(top)
docker top 容器id
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker top cc5e9fc03c24 UID PID PPID C STIME TTY TIME CMD root 4099704 4099684 0 14:16 ? 00:00:00 /bin/sh -c while true;do echo zhangzhen;sleep 1;done root 4153039 4099704 0 14:32 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1
查看容器的元数据(inspect)
docker inspect 容器id
# 测试 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker inspect cc5e9fc03c24 [ { "Id": "cc5e9fc03c2483464fd7fdf59e08700484c433a2861b1fcc75b5a4957ee30b68", "Created": "2023-12-01T06:16:16.59620772Z", "Path": "/bin/sh", "Args": [ "-c", "while true;do echo zhangzhen;sleep 1;done" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 4099704, "ExitCode": 0, "Error": "", "StartedAt": "2023-12-01T06:16:16.793193302Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:5d0da3dc976460b72c77d94c8a1ad043720b0416bfc16c52c45d4847e53fadb6", "ResolvConfPath": "/var/lib/docker/containers/cc5e9fc03c2483464fd7fdf59e08700484c433a2861b1fcc75b5a4957ee30b68/resolv.conf", "HostnamePath": "/var/lib/docker/containers/cc5e9fc03c2483464fd7fdf59e08700484c433a2861b1fcc75b5a4957ee30b68/hostname", "HostsPath": "/var/lib/docker/containers/cc5e9fc03c2483464fd7fdf59e08700484c433a2861b1fcc75b5a4957ee30b68/hosts", "LogPath": "/var/lib/docker/containers/cc5e9fc03c2483464fd7fdf59e08700484c433a2861b1fcc75b5a4957ee30b68/cc5e9fc03c2483464fd7fdf59e08700484c433a2861b1fcc75b5a4957ee30b68-json.log", "Name": "/epic_lichterman", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "", "ExecIDs": null, "HostConfig": { "Binds": null, "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "default", "PortBindings": {}, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "ConsoleSize": [ 42, 187 ], "CapAdd": null, "CapDrop": null, "CgroupnsMode": "host", "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": false, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": null, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 0, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": [], "BlkioDeviceWriteBps": [], "BlkioDeviceReadIOps": [], "BlkioDeviceWriteIOps": [], "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [], "DeviceCgroupRules": null, "DeviceRequests": null, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "ReadonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ] }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/131d9e4fac1a0e535173342f763446890e86aae78aaa0b6a849309c2b499a450-init/diff:/var/lib/docker/overlay2/769bb0aed3808ef5347e619f6a54277e460808b6aa621b92ed062bdee6afdda5/diff", "MergedDir": "/var/lib/docker/overlay2/131d9e4fac1a0e535173342f763446890e86aae78aaa0b6a849309c2b499a450/merged", "UpperDir": "/var/lib/docker/overlay2/131d9e4fac1a0e535173342f763446890e86aae78aaa0b6a849309c2b499a450/diff", "WorkDir": "/var/lib/docker/overlay2/131d9e4fac1a0e535173342f763446890e86aae78aaa0b6a849309c2b499a450/work" }, "Name": "overlay2" }, "Mounts": 
[], "Config": { "Hostname": "cc5e9fc03c24", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": [ "/bin/sh", "-c", "while true;do echo zhangzhen;sleep 1;done" ], "Image": "centos", "Volumes": null, "WorkingDir": "", "Entrypoint": null, "OnBuild": null, "Labels": { "org.label-schema.build-date": "20210915", "org.label-schema.license": "GPLv2", "org.label-schema.name": "CentOS Base Image", "org.label-schema.schema-version": "1.0", "org.label-schema.vendor": "CentOS" } }, "NetworkSettings": { "Bridge": "", "SandboxID": "40b2c17d30e4c547ac239d0fdfb75895b0b1737782209e71e07ffa663edcd001", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": {}, "SandboxKey": "/var/run/docker/netns/40b2c17d30e4", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "d6f1da40e705c865b18eeeeb90e562f6ce5097e674ad1759e2bf18b12c63dd2e", "Gateway": "172.17.0.1", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "172.17.0.2", "IPPrefixLen": 16, "IPv6Gateway": "", "MacAddress": "02:42:ac:11:00:02", "Networks": { "bridge": { "IPAMConfig": null, "Links": null, "Aliases": null, "NetworkID": "83419e7eb099022d963bd68462448a0c572ce0d0bec95e1df5f8e6eaac09e4e4", "EndpointID": "d6f1da40e705c865b18eeeeb90e562f6ce5097e674ad1759e2bf18b12c63dd2e", "Gateway": "172.17.0.1", "IPAddress": "172.17.0.2", "IPPrefixLen": 16, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:ac:11:00:02", "DriverOpts": null } } } } ]
查看进程的系统资源占用(stats)
docker stats
CONTAINER ID   NAME           CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O     PIDS
a191132b3adc   goofy_shamir   0.00%   900KiB / 1.678GiB     0.05%   936B / 0B         0B / 0B       1
82bf23d14271   tomcat01       0.03%   139.6MiB / 1.678GiB   8.12%   5.98kB / 35.9kB   14.4MB / 0B   25
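docker stats 默认会持续刷新,如果只想取一次快照后退出,可以加 --no-stream 参数(示例):
docker stats --no-stream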
拷贝
从容器内拷贝文件到主机(cp)
docker cp [选项] 容器id:容器内文件的路径 要拷贝到的主机路径
# 开始没有运行的容器 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE mysql latest 3218b38490ce 23 months ago 516MB centos latest 5d0da3dc9764 2 years ago 231MB # 创建并启动一个容器 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -it centos /bin/bash [root@7a4f6888691d /]# [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7a4f6888691d centos "/bin/bash" 20 seconds ago Up 19 seconds heuristic_murdock # 在主机的home路径下 新建一个zhangzhen.java文件 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# cd /home [root@iZ0jl1aa09m86u22kxbhvhZ home]# ls [root@iZ0jl1aa09m86u22kxbhvhZ home]# touch zhangzhen.java [root@iZ0jl1aa09m86u22kxbhvhZ home]# ls zhangzhen.java # 进入镜像中,在其home路径下新建一个叫test.java的文件 [root@iZ0jl1aa09m86u22kxbhvhZ home]# docker attach 7a4f6888691d [root@7a4f6888691d /]# cd /home [root@7a4f6888691d home]# ls [root@7a4f6888691d home]# touch test.java [root@7a4f6888691d home]# exit exit [root@iZ0jl1aa09m86u22kxbhvhZ home]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES [root@iZ0jl1aa09m86u22kxbhvhZ home]# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7a4f6888691d centos "/bin/bash" 4 minutes ago Exited (0) 25 seconds ago heuristic_murdock # 从容器内拷贝文件到主机 [root@iZ0jl1aa09m86u22kxbhvhZ home]# docker cp 7a4f6888691d:/home/test.java /home Successfully copied 1.54kB to /home [root@iZ0jl1aa09m86u22kxbhvhZ home]# ls test.java zhangzhen.java # 成功了
拷贝是一个手动过程,未来可以用一个-v卷的技术实现文件夹之间的互通同步
常用镜像
安装nginx
- docker 搜索 nginx
- 下载 nginx
[root@iZ0jl1aa09m86u22kxbhvhZ home]# docker search nginx NAME DESCRIPTION STARS OFFICIAL AUTOMATED nginx Official build of Nginx. 19299 [OK] unit Official build of NGINX Unit: Universal Web … 18 [OK] nginxinc/nginx-unprivileged Unprivileged NGINX Dockerfiles 135 nginx/nginx-ingress NGINX and NGINX Plus Ingress Controllers fo… 86 nginx/nginx-prometheus-exporter NGINX Prometheus Exporter for NGINX and NGIN… 33 nginxinc/nginx-s3-gateway Authenticating and caching gateway based on … 3 nginx/unit This repository is retired, use the Docker o… 64 nginx/nginx-ingress-operator NGINX Ingress Operator for NGINX and NGINX P… 2 nginxinc/amplify-agent NGINX Amplify Agent docker repository 1 nginx/nginx-quic-qns NGINX QUIC interop 1 nginxinc/ingress-demo Ingress Demo 4 nginxproxy/nginx-proxy Automated Nginx reverse proxy for docker con… 119 nginxproxy/acme-companion Automated ACME SSL certificate generation fo… 127 bitnami/nginx Bitnami nginx Docker Image 180 [OK] bitnami/nginx-ingress-controller Bitnami Docker Image for NGINX Ingress Contr… 32 [OK] ubuntu/nginx Nginx, a high-performance reverse proxy & we… 103 nginxinc/nginmesh_proxy_debug 0 nginxproxy/docker-gen Generate files from docker container meta-da… 14 nginxinc/mra-fakes3 0 kasmweb/nginx An Nginx image based off nginx:alpine and in… 6 nginxinc/ngx-rust-tool 0 rancher/nginx-ingress-controller 11 nginxinc/mra_python_base 0 nginxinc/nginmesh_proxy_init 0 [root@iZ0jl1aa09m86u22kxbhvhZ home]# docker pull nginx Using default tag: latest latest: Pulling from library/nginx a2abf6c4d29d: Pull complete a9edb18cadd1: Pull complete 589b7251471a: Pull complete 186b1aaa4aa6: Pull complete b4df32aa5a72: Pull complete a0bcbecc962e: Pull complete Digest: sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31 Status: Downloaded newer image for nginx:latest docker.io/library/nginx:latest [root@iZ0jl1aa09m86u22kxbhvhZ home]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE nginx latest 605c77e624dd 23 months ago 141MB mysql latest 3218b38490ce 23 months ago 516MB centos latest 5d0da3dc9764 2 years ago 231MB # 新建启动nginx容器(后台线程) 指定主机端口号3344 [root@iZ0jl1aa09m86u22kxbhvhZ home]# docker run -d --name nginx01 -p 3344:80 nginx f885748bf2c47848915c834f39d24043bfb107dafb13aba2ecc2883f6b13f607 [root@iZ0jl1aa09m86u22kxbhvhZ home]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES f885748bf2c4 nginx "/docker-entrypoint.…" 11 seconds ago Up 10 seconds 0.0.0.0:3344->80/tcp, :::3344->80/tcp nginx01 # 向该主机的3344端口发起网络请求 [root@iZ0jl1aa09m86u22kxbhvhZ home]# curl localhost:3344 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>
外网访问Docker容器:
- Linux 服务器上有一层防火墙,上面运行着 docker 服务,docker 里又运行着很多服务(容器)
- 容器由镜像创建,相当于一个小 linux 环境,里面也有一个防火墙
- 外网要访问 linux,如果服务器是阿里云的,那么在访问前还要先经过一层阿里云安全组
- 所以阿里云安全组要开放端口号(不设置的话有些端口默认不开放),linux(服务器)防火墙也要放行,同时宿主机(服务器)和其内部的 Docker 容器要打通联系;上方命令 docker run -d --name nginx01 -p 3344:80 nginx 实现的就是最后这一步端口映射
- 现在外网访问时(上方例子 curl localhost:3344 是自己主机访问自己的 3344 端口):请求先经过远程服务器的安全组,再经过服务器防火墙,由于服务器的 3344 与容器的 80 已经打通,最终就访问到了容器
之前出现的问题:
我启动容器后在公网访问不到:http://8.130.52.167:3344/
原因:没有打开阿里云安全组:
阿里云默认打开22端口号和3389端口号,我们需要设置一下允许访问的端口号,这里我设置的是所有端口号都开放给所有ip(主机),设置是:源:0.0.0.0/0
思考问题:我们以后要部署项目,如果每次都要进入容器操作是不是十分麻烦?要是可以在容器外部提供一个映射路径(比如 webapps),我们在外部放置项目,就能自动同步到容器内部就好了!
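这个需求正是后面“容器数据卷”一章要解决的,这里先给一个大致写法的示意(仅作预览,宿主机目录 /home/tomcat/webapps 和容器名 tomcat-test 都是假设的,细节见后文):
# 把宿主机目录挂载到容器的 webapps 目录,之后在宿主机目录里放置项目即可同步到容器
docker run -d -p 3355:8080 --name tomcat-test \
    -v /home/tomcat/webapps:/usr/local/tomcat/webapps tomcat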
安装tomcat
# 官方使用 之前实践启动都是后台,停止了容器之后还能查到 下面官方的使用 用完就删除 docker run -it --rm tomcat:9.0 # 安装tomcat镜像 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker pull tomcat ... # ( 新建并启动一个tomcat镜像 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -d -p 3355:8080 --name tomcat01 tomcat ... [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 82bf23d14271 tomcat "catalina.sh run" 9 seconds ago Up 8 seconds 0.0.0.0:3355->8080/tcp, :::3355->8080/tcp tomcat01 # ) # 测试本地连接没问题 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# curl localhost:3355 ... # 但公网请求不成功 # 发现问题: linux命令少了;没有webapps 阿里云镜像的原因。默认是最小的镜像,所有不必要的都剔除掉 保证最小可运行环境
解决方法(在webapps里添加文件):
# 进入镜像中 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker exec -it tomcat01 /bin/bash # 可以看出当前镜像中有webapps文件夹和webapps.dist文件夹 root@82bf23d14271:/usr/local/tomcat# ls -l total 132 -rw-r--r-- 1 root root 18994 Dec 2 2021 BUILDING.txt -rw-r--r-- 1 root root 6210 Dec 2 2021 CONTRIBUTING.md -rw-r--r-- 1 root root 60269 Dec 2 2021 LICENSE -rw-r--r-- 1 root root 2333 Dec 2 2021 NOTICE -rw-r--r-- 1 root root 3378 Dec 2 2021 README.md -rw-r--r-- 1 root root 6905 Dec 2 2021 RELEASE-NOTES -rw-r--r-- 1 root root 16517 Dec 2 2021 RUNNING.txt drwxr-xr-x 2 root root 4096 Dec 22 2021 bin drwxr-xr-x 1 root root 22 Dec 1 13:36 conf drwxr-xr-x 2 root root 4096 Dec 22 2021 lib drwxrwxrwx 1 root root 80 Dec 1 13:36 logs drwxr-xr-x 2 root root 159 Dec 22 2021 native-jni-lib drwxrwxrwx 2 root root 30 Dec 22 2021 temp drwxr-xr-x 2 root root 6 Dec 22 2021 webapps drwxr-xr-x 7 root root 81 Dec 2 2021 webapps.dist drwxrwxrwx 2 root root 6 Dec 2 2021 work # webapps文件夹中没有文件 root@82bf23d14271:/usr/local/tomcat# cd webapps root@82bf23d14271:/usr/local/tomcat/webapps# ls 空 # webapps.dist中有文件 root@82bf23d14271:/usr/local/tomcat/webapps# cd .. root@82bf23d14271:/usr/local/tomcat# cd webapps.dist root@82bf23d14271:/usr/local/tomcat/webapps.dist# ls ROOT docs examples host-manager manager # 将webapps.dist文件夹中的所有内容 复制到 webapps文件夹中 可以看到成功了 root@82bf23d14271:/usr/local/tomcat/webapps.dist# cd .. root@82bf23d14271:/usr/local/tomcat# cp -r webapps.dist/* webapps root@82bf23d14271:/usr/local/tomcat# cd webapps root@82bf23d14271:/usr/local/tomcat/webapps# ls ROOT docs examples host-manager manager
安装es+kibana
拉取镜像elasticsearch
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker pull elasticsearch Unable to find image 'elasticsearch:latest' locally latest: Pulling from library/elasticsearch 05d1a5232b46: Pull complete ...
启动elasticsearch:
docker run -it --name elastic -p 9200:9200 elasticsearch
报错:
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x000000008a660000, 1973026816, 0) failed; error='Cannot allocate memory' (errno=12)
原因:es十分耗内存,内存不足
解决办法:添加环境变量减少其占的内存
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -d --name elastic01 -p 9200:9200 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms64m -Xmx512m" elasticsearch caa82fec6990fcacc0697ec559e43c898690488551aa17f4f0bb5d6878ac4881 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES caa82fec6990 elasticsearch "/docker-entrypoint.…" 9 seconds ago Up 8 seconds 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9300/tcp elastic01 # 可以看到: 内寸占比已经下降了230.3MiB / 1.678GiB [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker stats CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS caa82fec6990 elastic01 0.15% 230.3MiB / 1.678GiB 13.40% 836B / 0B 60.4MB / 30.7kB 36
测试连接:http://8.130.52.167:9200/
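也可以直接在服务器上用 curl 检查一下(假设容器已正常启动、9200 端口已映射):
curl localhost:9200
# 正常情况下会返回一段 JSON,包含 name、cluster_name、version 等字段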
安装portainer
Docker图形化界面管理工具!提供一个后台面板供我们操作!
# 拉取镜像
docker pull portainer/portainer
# 创建启动容器(portainer 在容器内监听的是 9000 端口,这里把它映射到主机的 8088)
docker run -d -p 8088:9000 \
--restart=always -v /var/run/docker.sock:/var/run/docker.sock --privileged=true portainer/portainer
阿里云安全组常见默认端口:
22:SSH 远程连接服务
80:HTTP 请求的默认端口(超文本传输协议),其他常用的还有 8080/3128/8081/9080
443:HTTPS 服务请求
1521:Oracle 数据库
3306:MySQL
6379:Redis
3389:Windows RDP 远程登录
开启服务器外网访问:http(80)
但是访问:http://8.130.52.167:8080/ 不成功
容器数据卷
起源
docker的理念回顾
将应用和环境打包成一个镜像
如果数据都在容器中,那么我们容器删除,数据就会丢失;需求:数据可以持久化
如果数据在MySql中把容器删了就成删库跑路了!需求:MySql数据可以存储在本地
容器之间可以有一个数据共享的技术!Docker容器中产生的数据,同步到本地!这就是卷技术,将我们容器内的目录挂载到Linux上
挂载
挂载实现数据双向绑定:
手动挂载
# -v 挂载命令:将主机的 /home/ceshi 目录挂载到容器内的 /home 目录(双向同步)
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -it -v /home/ceshi:/home centos /bin/bash

# 查看容器元数据,挂载成功!
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker inspect 7f1b7f3a6414
[
 ...
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/home/ceshi",
                "Destination": "/home",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
 ...
]
数据同步:
实战-MySQL数据同步
运行MySql
docker pull mysql:5.7 # 运行容器,需要做数据挂载! # 安装启动mysql , 需要配置密码的! # 启动我们的mysql, 注意:这里容器的端口号一定要写3306,不然远程连接不到该服务器 -d 后台运行 -p 端口映射 -v 卷挂载 -e 环境配置 --name 容器名字 docker run -d -p 3300:3306 -v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 --name mysql01 mysql:5.7 # 启动成功后,我们在本地使用 数据库可视化面板 来接测试一下 # 本机数据库可视化面板-连接到服务器的3300 --- 3306 和容器的3306映射,但是我第一次尝试总是连接不上!
客户端访问mysql的时候只有3306端口才能访问
之前容器端口号没写 3306 导致远程连接不上,当时搜了很多解决办法(比如下面这个),但现在看来问题并不在这里
问题:可能是加密规则问题
#1、进入容器
docker exec -it 容器id /bin/bash
#2、进入mysql客户端
mysql -u root -p123456    (123456 是密码)
#3、查看用户状态,发现加密规则不是mysql_native_password
select host,user,plugin,authentication_string from mysql.user;
#4、修改加密规则(密码仍为 123456)
ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY '123456';
主机mysql可视化软件navicat(客户端)连接mysql(服务端)
在里面新建一个数据库,名叫:aaaaaaaaaaaaaaa
[root@iZ0jl1aa09m86u22kxbhvhZ lib]# cd /home/mysql/data # 在服务器挂载的目录里能找到对应的数据库 [root@iZ0jl1aa09m86u22kxbhvhZ data]# ls aaaaaaaaa ca-key.pem client-cert.pem ib_buffer_pool ib_logfile0 ibtmp1 performance_schema public_key.pem server-key.pem auto.cnf ca.pem client-key.pem ibdata1 ib_logfile1 mysql private_key.pem server-cert.pem sys # 测试数据卷的安全性:如果删除容器,主机(服务器)挂载目录里的文件会消失吗? [root@iZ0jl1aa09m86u22kxbhvhZ data]# docker stop mysql01 mysql01 [root@iZ0jl1aa09m86u22kxbhvhZ data]# docker rm mysql01 mysql01 # 可以看到即使容器删除了,里面有过的内容也保存下来了,非常安全; 但容器都没了,客户端是肯定连接不上了 [root@iZ0jl1aa09m86u22kxbhvhZ data]# ls aaaaaaaaa ca-key.pem client-cert.pem ib_buffer_pool ib_logfile0 mysql private_key.pem server-cert.pem sys auto.cnf ca.pem client-key.pem ibdata1 ib_logfile1 performance_schema public_key.pem server-key.pem
具名和匿名挂载
自动挂载:具名挂载和匿名挂载都不指定主机路径
# 匿名挂载 -v 容器内路径 docker run -d -P --name nginx01 -v /ect/nginx nginx #查看所有的volume的命令参数 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker volume --help Commands: create Create a volume inspect Display detailed information on one or more volumes # 查看一个卷或多个卷的详细信息 ls List volumes prune Remove unused local volumes rm Remove one or more volumes # 查看所有的volume的情况 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker volume ls DRIVER VOLUME NAME local 1dffe2115a29d79e44848ff4a5d62f30393eef78280fedaa86224eb7bc58f391 ... # 这里发现,这种都是匿名挂载,我们在-v 只写了容器内的路径,没有写容器外的路径 # 具名挂载 -P是随机端口号(懒得设置) [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -d -P --name nginx02 -v juming-nginx:/etc/nginx nginx 14dd633564f03a397652cd4d08dc38d75bac0906d40cae0caa3ded6935dd27b8 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker volume ls DRIVER VOLUME NAME local 1dffe2115a29d79e44848ff4a5d62f30393eef78280fedaa86224eb7bc58f391 local 31f6b3fb1545ab2e720803ce63269164565f20b7bd3f5879a0db1050370660fb ... local juming-nginx、 # 通过 -v 卷名:容器内路径 # 查看一下这个卷 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker volume inspect juming-nginx [ { "CreatedAt": "2023-12-07T10:55:12+08:00", "Driver": "local", "Labels": null, "Mountpoint": "/var/lib/docker/volumes/juming-nginx/_data", # 能够看到所有未指定挂载主机路径的 容器路径内的数据都统一放置到这个文件夹下 "Name": "juming-nginx", "Options": null, "Scope": "local" } ] [root@iZ0jl1aa09m86u22kxbhvhZ ~]# ls /var/lib/docker/volumes 081ae44dae734a08596b06972b314a982cefb3ebd441175cbff49a3838bad834 79f20ec3ae0578a26730d90d4b661ddb44c89781745ec5c43318e232cb020ec3 31f6b3fb1545ab2e720803ce63269164565f20b7bd3f5879a0db1050370660fb backingFsBlockDev 673941464dde39ece2500499bc4c0e4125d8ceeb5778c1481f2b21f40bd4d5c7 juming-nginx 74b7fea1d54fe750f8bd5af5cb9b288646b497cfd3a08cd54879b628c99f80c1 metadata.db ...
所有 docker 容器内的卷,在没有指定主机目录的情况下,都放在 /var/lib/docker/volumes/xxxx/_data 下
我们通过具名挂载可以方便地找到我们的卷,大多数情况下使用的是具名挂载
# 如何确定是具名挂载还是匿名挂载,还是指定路径挂载
-v 容器内路径               # 匿名挂载
-v 卷名:容器内路径          # 具名挂载
-v /宿主机路径:容器内路径    # 指定路径挂载
拓展:
# 通过 -v 容器内路径:ro / rw 改变读写权限
ro   readonly    # 只读
rw   readwrite   # 可读可写
# 一旦设置了容器权限,容器对我们挂载出来的内容就有限定了
docker run -d -P --name nginx02 -v juming-nginx:/etc/nginx:ro nginx
docker run -d -P --name nginx02 -v juming-nginx:/etc/nginx:rw nginx
# ro:只要看到 ro 就说明这个路径只能通过宿主机来操作,容器内部是无法操作的
容器同步
数据卷容器
利用指定的容器给别的容器同步数据
先启动两个容器测试能不能同步数据:
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE tomcat02 1.0 64b996ff638e 3 days ago 684MB nginx latest 605c77e624dd 23 months ago 141MB tomcat latest fb5657adc892 23 months ago 680MB mysql 5.7 c20987f18b13 23 months ago 448MB mysql latest 3218b38490ce 23 months ago 516MB zzd/centos 1.0 bcb7b9d02259 2 years ago 231MB centos latest 5d0da3dc9764 2 years ago 231MB portainer/portainer latest 580c0e4e98b0 2 years ago 79.1MB elasticsearch latest 5acf0e8da90b 5 years ago 486MB # 启动自己构建镜像的容器 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -it --name docker01 zzd/centos:1.0 [root@088546525e26 /]# ls bin dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var volume01 volume02 # 再启动一个相同容器将其数据与前一个容器数据同步(因为容器相同,挂载的文件也都相同,同步好理解) [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -it --name docker02 --volumes-from docker01 zzd/centos:1.0 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 399013d816c6 zzd/centos:1.0 "/bin/sh -c /bin/bash" About a minute ago Up 5 seconds docker02 088546525e26 zzd/centos:1.0 "/bin/sh -c /bin/bash" 9 minutes ago Up 3 minutes docker01 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker attach 399013d816c6 [root@399013d816c6 /]# ls bin dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var volume01 volume02 [root@399013d816c6 /]# cd volume01 [root@399013d816c6 volume01]# touch docker01file # 此时在同步容器中就能找到docker01file,接着我又操作在同步容器中创建docker02file,下面再此容器中也能找到 [root@399013d816c6 volume01]# ls docker01file docker02file
多个mysql实现数据共享
docker run -d -p 3310:3306 -v /etc/mysql/conf.d -v /var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 --name mysql01 mysql:5.7
# 第二个容器通过 --volumes-from 共享 mysql01 的数据卷(容器内端口仍然是 3306)
docker run -d -p 3311:3306 -e MYSQL_ROOT_PASSWORD=123456 --name mysql02 --volumes-from mysql01 mysql:5.7
# 此时两个容器数据同步
结论:
容器之间配置信息的传递,数据卷容器的生命周期一直持续到没有容器使用为止
但是一旦你持久化到了本地,这个时候,本地的数据是不会删除的!
Dockerfile
根据Dockerfile构建镜像
Dockerfile就是用来构建docker镜像的构建文件!命令脚本!体验一下
通过这个脚本可以生成镜像,镜像是一层一层的,脚本一个个的命令,每个命令都是一层!
下面先生成一个镜像,在构建镜像的时候就声明好要挂载的目录
# 创建一个dockerfile文件,名字可以随机 建议 Dockerfile # 文件中的内容 指令(大写) 参数 FROM centos VOLUME ["volume01","volume02"] CMD echo "----end----" CMD /bin/bash # 这里的每个命令,就是镜像的一层,上面内容解释: # 现在我所处的路径和刚刚创建的脚本文件 [root@iZ0jl1aa09m86u22kxbhvhZ docker-test-volume]# pwd /home/docker-test-volume [root@iZ0jl1aa09m86u22kxbhvhZ docker-test-volume]# ls dockerfile1 # 构建自己的镜像 [root@iZ0jl1aa09m86u22kxbhvhZ docker-test-volume]# docker build -f /home/docker-test-volume/dockerfile1 -t zzd/centos:1.0 . [+] Building 0.1s (5/5) FINISHED docker:default => [internal] load build definition from dockerfile1 0.0s => => transferring dockerfile: 121B 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => [internal] load metadata for docker.io/library/centos:latest 0.0s => [1/1] FROM docker.io/library/centos 0.0s => exporting to image 0.0s => => exporting layers 0.0s => => writing image sha256:bcb7b9d022595d87e4f374a55119c8e2e4933dd784edc7ca0ca447c4d8abff2f 0.0s => => naming to docker.io/zzd/centos:1.0 0.0s [root@iZ0jl1aa09m86u22kxbhvhZ docker-test-volume]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE centos latest 5d0da3dc9764 2 years ago 231MB zzd/centos 1.0 bcb7b9d02259 2 years ago 231MB ...
测试新建容器时是否真的会执行脚本里的命令,实现自动挂载
# 创建一个自己构建的镜像的容器 并在构建的(自动实现挂载文件夹)文件夹下创建一个文件,那么应该已经被挂载的主机路径内肯定也有该文件 [root@iZ0jl1aa09m86u22kxbhvhZ docker-test-volume]# docker run -it bcb7b9d02259 /bin/bash [root@3ec2efc4712e /]# ls bin dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var volume01 volume02 [root@3ec2efc4712e /]# cd v bash: cd: v: No such file or directory [root@3ec2efc4712e /]# cd volume01 [root@3ec2efc4712e volume01]# touch container.txt [root@3ec2efc4712e volume01]# ls container.txt [root@3ec2efc4712e volume01]# #--- 查看该容器元数据 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker inspect 3ec2efc4712e "Mounts": [ { "Type": "volume", "Name": "1321fdb396fca8603c72fd340571dce9f0f309c5d32be14deec66b76d8fe67d2", "Source": "/var/lib/docker/volumes/1321fdb396fca8603c72fd340571dce9f0f309c5d32be14deec66b76d8fe67d2/_data", "Destination": "volume01", "Driver": "local", "Mode": "", "RW": true, "Propagation": "" }, { "Type": "volume", "Name": "ab11794d133c267f5a672e59fa032e5d3b1d02b271e2f6908e19872d6e47fd07", "Source": "/var/lib/docker/volumes/ab11794d133c267f5a672e59fa032e5d3b1d02b271e2f6908e19872d6e47fd07/_data", "Destination": "volume02", "Driver": "local", "Mode": "", "RW": true, "Propagation": "" } ]
以后这种挂载方式会使用比较多
DockerFile构建过程
基础知识:
- 每个保留关键字(指令)都必须是大写字母
- 指令从上到下顺序执行
- # 表示注释
- 每一个指令都会创建并提交一个新的镜像层!
dockerfile是面向开发的,我们以后要发布项目,做镜像,就需要编写dockerfile文件,这个文件十分简单!
Docker镜像逐渐成为企业交付的标准,必须掌握!
DockerFile:构建文件,定义了一切的步骤,源代码
DockerImages:通过DockerFile构建生成的镜像,最终发布和运行的产品!
Docker容器:容器就是镜像运行起来之后对外提供服务的实例
DockerFile的指令
FROM         # 基础镜像,一切从这里开始构建
MAINTAINER   # 镜像是谁写的,姓名+邮箱
COPY         # 类似ADD,将我们的文件拷贝到镜像中
ADD          # 将主机目录下的某文件添加到镜像的某个目录下(前提镜像里得有这个目录),比如:ADD my.cnf /etc/mysql
ENV          # 构建的时候设置环境变量
WORKDIR      # 镜像的工作目录
ONBUILD      # 当构建一个被继承的 DockerFile 时就会运行 ONBUILD 的指令。触发指令。
RUN          # 镜像构建的时候需要运行的命令
VOLUME       # 挂载的目录
EXPOSE       # 保留端口配置
CMD          # 指定这个容器启动的时候要运行的命令,只有最后一个会生效,可被替代
ENTRYPOINT   # 指定这个容器启动的时候要运行的命令,可以追加命令
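把上面几个常用指令串起来,用一个最小的示例体验一下(仅作指令用法示意,readme.txt、镜像名 mytest 都是假设的;注意 Dockerfile 的注释必须独占一行):
# 准备构建上下文:一个用于演示 COPY 的文件和一个 Dockerfile
touch readme.txt
cat > Dockerfile <<'EOF'
FROM centos:7
MAINTAINER zhangzhen<123456@qq.com>
ENV MYPATH /usr/local
WORKDIR $MYPATH
COPY readme.txt $MYPATH/
VOLUME ["/volume01","/volume02"]
EXPOSE 8080
CMD ["/bin/bash"]
EOF
# 官方命名为 Dockerfile 时,build 不需要 -f 指定文件
docker build -t mytest:1.0 .
docker run -it mytest:1.0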
实战:构建centos镜像
新建一个官方的centos容器,发现有一些指令没有安装
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -it centos
[root@5a0bad3fbab0 /]# pwd
/
[root@5a0bad3fbab0 /]# vim
bash: vim: command not found
[root@5a0bad3fbab0 /]# ifconfig
bash: ifconfig: command not found
新建一个dockerfile文件
[root@iZ0jl1aa09m86u22kxbhvhZ home]# ls ceshi docker-test-volume mysql test.java zhangzhen.java [root@iZ0jl1aa09m86u22kxbhvhZ home]# mkdir dockerfile [root@iZ0jl1aa09m86u22kxbhvhZ home]# ls ceshi dockerfile docker-test-volume mysql test.java zhangzhen.java [root@iZ0jl1aa09m86u22kxbhvhZ home]# cd dockerfile # 编辑dockerfile文件,由于 CentOS 已经停止维护的问题并推出了 CentOS Stream 项目,CentOS Linux 8 作为 RHEL 8 的复刻版本,生命周期缩短。如果需要更新 CentOS,需要将镜像从 mirror.centos.org 更改为 vault.centos.org,所以RUN yum -y install vim前面加上一些解决办法命令 [root@iZ0jl1aa09m86u22kxbhvhZ dockerfile]# vim mydockerfile FROM centos MAINTAINER zhangzhen<123456@qq.com> ENV MYPATH /usr/local WORKDIR $MYPATH RUN cd /etc/yum.repos.d/ RUN sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-* RUN sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-* RUN yum makecache RUN yum update -y RUN yum -y install vim RUN yum -y install net-tools EXPOSE 80 CMD echo $MYPATH CMD echo "----end---" CMD /bin/bash # 构建镜像,成功 [root@iZ0jl1aa09m86u22kxbhvhZ dockerfile]# docker build -f mydockerfile -t mycentos:1.0 . ...
现在启动刚刚构建的镜像
[root@5a0bad3fbab0 /]# [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -it mycentos:1.0 [root@ae18f6fce060 local]# ls bin etc games include lib lib64 libexec sbin share src [root@ae18f6fce060 local]# pwd /usr/local [root@ae18f6fce060 local]# ifconfig # 成功
查看镜像的构建历史(history)
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker history mycentos:1.0 IMAGE CREATED CREATED BY SIZE COMMENT 0c83f0e1f0db 21 minutes ago CMD ["/bin/sh" "-c" "/bin/bash"] 0B buildkit.dockerfile.v0 <missing> 21 minutes ago CMD ["/bin/sh" "-c" "echo \"----end---\""] 0B buildkit.dockerfile.v0 <missing> 21 minutes ago CMD ["/bin/sh" "-c" "echo $MYPATH"] 0B buildkit.dockerfile.v0 <missing> 21 minutes ago EXPOSE map[80/tcp:{}] 0B buildkit.dockerfile.v0 <missing> 21 minutes ago RUN /bin/sh -c yum -y install net-tools # bu… 28.6MB buildkit.dockerfile.v0 <missing> 21 minutes ago RUN /bin/sh -c yum -y install vim # buildkit 67.1MB buildkit.dockerfile.v0 <missing> 21 minutes ago RUN /bin/sh -c yum update -y # buildkit 276MB buildkit.dockerfile.v0 <missing> 22 minutes ago RUN /bin/sh -c yum makecache # buildkit 27.2MB buildkit.dockerfile.v0 <missing> 22 minutes ago RUN /bin/sh -c sed -i 's|#baseurl=http://mir… 8.8kB buildkit.dockerfile.v0 <missing> 22 minutes ago RUN /bin/sh -c sed -i 's/mirrorlist/#mirrorl… 8.82kB buildkit.dockerfile.v0 <missing> 22 minutes ago RUN /bin/sh -c cd /etc/yum.repos.d/ # buildk… 0B buildkit.dockerfile.v0 <missing> 32 minutes ago WORKDIR /usr/local 0B buildkit.dockerfile.v0 <missing> 32 minutes ago ENV MYPATH=/usr/local 0B buildkit.dockerfile.v0 <missing> 32 minutes ago MAINTAINER zhangzhen<123456@qq.com> 0B buildkit.dockerfile.v0 <missing> 2 years ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B <missing> 2 years ago /bin/sh -c #(nop) LABEL org.label-schema.sc… 0B <missing> 2 years ago /bin/sh -c #(nop) ADD file:805cb5e15fb6e0bb0… 231MB
测试相似命令的不同处:
CMD对比ENTRYPOINT
CMD:
[root@iZ0jl1aa09m86u22kxbhvhZ dockerfile]# vim mydocker02
# CMD 指定这个容器启动的时候要运行的命令
FROM centos
CMD ["ls","-a"]

# 根据此dockerfile构建一个镜像
[root@iZ0jl1aa09m86u22kxbhvhZ dockerfile]# docker build -f mydocker02 -t mycentos:2.0 .

# 新建并启动该镜像的一个容器,发现 ls -a 命令自动执行
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run mycentos:2.0
.   ..   .dockerenv   bin   dev   etc   home   lib   lib64   lost+found   media   mnt   opt   proc   root   run   sbin   srv   sys   tmp   usr   var

# 但!!!如果新建启动容器时手动追加指令会报错:在 CMD 的情况下,-l 会整个替换掉 CMD ["ls","-a"] 命令,而 -l 本身不是命令,所以报错
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run mycentos:2.0 -l
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "-l": executable file not found in $PATH: unknown.
ERRO[0000] error waiting for container:
ENTRYPOINT
[root@iZ0jl1aa09m86u22kxbhvhZ dockerfile]# vim mydocker03 FROM centos ENTRYPOINT ["ls","-a"] [root@iZ0jl1aa09m86u22kxbhvhZ dockerfile]# docker build -f mydocker03 -t mycentos:3.0 . [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run mycentos:3.0 . .. .dockerenv bin dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var # 这里跟上面CMD指令的不一样 手动输入的指令是直接拼接在ENTRYPOINT命令的后面 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run mycentos:3.0 -l total 0 drwxr-xr-x 1 root root 6 Dec 8 14:22 . drwxr-xr-x 1 root root 6 Dec 8 14:22 .. -rwxr-xr-x 1 root root 0 Dec 8 14:22 .dockerenv lrwxrwxrwx 1 root root 7 Nov 3 2020 bin -> usr/bin drwxr-xr-x 5 root root 340 Dec 8 14:22 dev drwxr-xr-x 1 root root 66 Dec 8 14:22 etc drwxr-xr-x 2 root root 6 Nov 3 2020 home lrwxrwxrwx 1 root root 7 Nov 3 2020 lib -> usr/lib lrwxrwxrwx 1 root root 9 Nov 3 2020 lib64 -> usr/lib64 drwx------ 2 root root 6 Sep 15 2021 lost+found drwxr-xr-x 2 root root 6 Nov 3 2020 media drwxr-xr-x 2 root root 6 Nov 3 2020 mnt drwxr-xr-x 2 root root 6 Nov 3 2020 opt dr-xr-xr-x 143 root root 0 Dec 8 14:22 proc dr-xr-x--- 2 root root 162 Sep 15 2021 root drwxr-xr-x 11 root root 163 Sep 15 2021 run lrwxrwxrwx 1 root root 8 Nov 3 2020 sbin -> usr/sbin drwxr-xr-x 2 root root 6 Nov 3 2020 srv dr-xr-xr-x 13 root root 0 Dec 8 14:22 sys drwxrwxrwt 7 root root 171 Sep 15 2021 tmp drwxr-xr-x 12 root root 144 Sep 15 2021 usr drwxr-xr-x 20 root root 262 Sep 15 2021 var
实战:构建tomcat镜像
这里我耗时很久:自己下载的压缩包解压后的文件夹目录和狂神的不一样,脚本也有好几处错误(错误的文件夹名称没注意到),所以编写的时候一定要仔细核对一遍才能成功。我看其他人还有因为 jdk 版本不对出问题的情况,这个我没有遇到。
- 准备 tomcat 压缩包、jdk 压缩包:资源可以在本机用 ftp 上传到服务器,也可以在服务器上用 wget 指令从网上下载
# 我上传到这个文件夹 下面脚本里写的文件夹名 就是 这个压缩包解压后的文件名,每个人的可能不一样 [root@iZ0jl1aa09m86u22kxbhvhZ tomcat]# ls apache-tomcat-9.0.22.tar.gz jdk-8-linux-x64.tar.gz
- 编写 dockerfile 文件,官方命名为 Dockerfile
, build 会自动寻找这个文件,就不需要-f制定了!# 添加一个文件 [root@iZ0jl1aa09m86u22kxbhvhZ tomcat]# touch readme.txt # 编写dockerfile文件 # 从镜像centos:7开始构建;作者邮箱;将该目录下的readme.txt文件拷贝到镜像的user/local目录下;解压文件夹(解压后名字默认就是没有压缩包后缀名);运行命令(安装vim);设置镜像工作目录;设置环境变量;暴露端口;指定容器启动的时候创建的命令 [root@iZ0jl1aa09m86u22kxbhvhZ tomcat]# vim Dockerfile FROM centos:7 MAINTAINER zhangzhen<29959@qq.com> COPY readme.txt /usr/local/readme.txt ADD jdk-8-linux-x64.tar.gz /usr/local/ ADD apache-tomcat-9.0.22.tar.gz /usr/local/ RUN yum -y install vim ENV MYPATH /usr/local WORKDIR $MYPATH ENV JAVA_HOME /usr/local/jdk1.8.0_301 ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar ENV CATALINE_HOME /usr/local/apache-tomcat-9.0.22 ENV CATALINE_BASH /usr/local/apache-tomcat-9.0.22 ENV PATH $PATH:$JAVA_HOME/bin:$CATALINE_HOME/lib:$CATALINE_HOME/bin EXPOSE 8080 CMD /usr/local/apache-tomcat-9.0.22/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.22/logs/catalina.out # 构建镜像 [root@iZ0jl1aa09m86u22kxbhvhZ tomcat]# docker build -t diytomcat . ...# 如果报错: CentOS Linux 8 - AppStream 问题是由于最新版本CentOS Linux 8中vim软件资源下载失败导致。将dockerfile文件中centos版本改成7即可,我上面已经改了 [root@iZ0jl1aa09m86u22kxbhvhZ tomcat]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE diytomcat latest 9ecf4be4ea0d 14 seconds ago 865MB tomcatmy 1.0 5f48ca957587 2 days ago 705MB tomcat latest fb5657adc892 24 months ago 680MB redis 5.0.9-alpine3.11 3661c84ee9d0 3 years ago 29.8MB [root@iZ0jl1aa09m86u22kxbhvhZ tomcat]# docker run -d -p 9093:8080 --name zhangzhentomcat -v /home/tomcat/test:/usr/local/apache-tomcat-9.0.22/webapps/test -v /home/tomcat/tomcatlogs:/usr/local/apache-tomcat-9.0.22/logs diytomcat # 在主机可以访问到 [root@iZ0jl1aa09m86u22kxbhvhZ tomcat]# curl localhost:9093 <!DOCTYPE html> <html lang="en"> <head> ... [root@iZ0jl1aa09m86u22kxbhvhZ tomcat]# ls apache-tomcat-9.0.22.tar.gz Dockerfile jdk-8-linux-x64.tar.gz readme.txt test tomcatlogs # 那么在公网: http://8.130.52.167:9093/ 也可以访问 [root@iZ0jl1aa09m86u22kxbhvhZ tomcat]# cd test [root@iZ0jl1aa09m86u22kxbhvhZ test]# ls # 在test文件夹中新建WEB-INF文件夹 [root@iZ0jl1aa09m86u22kxbhvhZ test]# mkdir WEB-INF [root@iZ0jl1aa09m86u22kxbhvhZ test]# cd WEB-INF # 在WEB-INF文件夹中新建web.xml配置文件 添加一些配置 [root@iZ0jl1aa09m86u22kxbhvhZ WEB-INF]# vim web.xml <?xml version="1.0" encoding="UTF-8"?> <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" id="WebApp_ID" version="3.0"> </web-app> [root@iZ0jl1aa09m86u22kxbhvhZ WEB-INF]# cd .. # 在test文件夹中新建index.jsp文件 添加一些文本,并脚本打印: 你的 IP 地址 [root@iZ0jl1aa09m86u22kxbhvhZ test]# touch index.jsp [root@iZ0jl1aa09m86u22kxbhvhZ test]# ls index.jsp WEB-INF [root@iZ0jl1aa09m86u22kxbhvhZ test]# vim index.jsp <%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%> <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>菜鸟教程(runoob.com)</title> </head> <body> Hello World!<br/> <% System.out.println("你的 IP 地址 "); %> </body> </html> [root@iZ0jl1aa09m86u22kxbhvhZ test]# cd .. [root@iZ0jl1aa09m86u22kxbhvhZ tomcat]# ls apache-tomcat-9.0.22.tar.gz Dockerfile jdk-8-linux-x64.tar.gz readme.txt test tomcatlogs [root@iZ0jl1aa09m86u22kxbhvhZ tomcat]# cd tomcatlogs [root@iZ0jl1aa09m86u22kxbhvhZ tomcatlogs]# ls catalina.2023-12-17.log host-manager.2023-12-17.log localhost_access_log.2023-12-17.txt catalina.out localhost.2023-12-17.log manager.2023-12-17.log # 日志在这个文件里面,我登录网址三次,脚本打印了三次,不过中文没有转码 [root@iZ0jl1aa09m86u22kxbhvhZ tomcatlogs]# cat catalina.out ... 
17-Dec-2023 14:14:20.397 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [655] milliseconds ?? IP ?? ?? IP ?? ?? IP ??
发布镜像到平台
发布到Docker Hub中
由于Docker Hub无法访问成功,我先不做这个了
发布到阿里云容器服务中
1、搜索阿里云镜像服务 ACR,在实例列表中选中一个你的实例(没有就创建一个),进入实例中,创建一个命名空间
2、然后创建一个仓库
3、进入仓库中
不同的命名空间和不同仓库名这里生成的指令是不一样的, 照着它的执行就好
根据指定命令执行,依次在服务器中执行:
- 先设置登录密码,然后在服务器上登录:
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker login --username=aliyun7941171782 registry.cn-wulanchabu.aliyuncs.com Password: WARNING! Your password will be stored unencrypted in /root/.docker/config.json. Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE zzd/centos 1.0 bcb7b9d02259 2 years ago 231MB ... # 将想提交的镜像写好,标注版本号 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker tag bcb7b9d02259 registry.cn-wulanchabu.aliyuncs.com/bilibili-zzd/zhangzhentest:1.0 # 把它push到阿里云 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker push registry.cn-wulanchabu.aliyuncs.com/bilibili-zzd/zhangzhentest:1.0 The push refers to repository [registry.cn-wulanchabu.aliyuncs.com/bilibili-zzd/zhangzhentest] 74ddd0ec08fa: Pushed 1.0: digest: sha256:a1b376db451527de20552d0c6df1eedee23dde2c20cc0422ee94d0010b1a50b1 size: 529
Docker网络
先清空所有Docker环境:
首先使用 docker system df 命令查看磁盘的使用情况:
docker system df
返回的结果如下:
TYPE            TOTAL   ACTIVE   SIZE      RECLAIMABLE
Images          33      8        16.8GB    16.39GB (97%)
Containers      9       1        37.43kB   36.44kB (97%)
Local Volumes   7       2        0B        0B
Build Cache     507     0        21.19GB   21.19GB
请注意,RECLAIMABLE 是可回收的空间大小,它是用镜像总大小减去正在使用(活跃)的镜像大小计算出来的。
接下来就可以使用以下方法来清理:
- 清理停止的容器:使用 docker rm 命令,命令格式为:docker rm <container_id>。
- 清理未使用的镜像:使用 docker image prune 命令,命令格式为:docker image prune。
- 清理无用的数据卷:使用 docker volume prune 命令,也可以用 docker volume rm $(docker volume ls -qf dangling=true)。
- 清理未使用的网络:使用 docker network prune 命令,命令格式为:docker network prune。
- 清理Docker缓存:使用 docker builder prune 命令,命令格式为:docker builder prune。
- 清理Docker日志:使用 docker logs 命令查看容器日志,确认无用日志后,使用 truncate 命令清空日志文件,命令格式为:truncate -s 0 <logfile>。
这些清理方法可以根据需要组合使用,有效地清理 Docker 环境中的无用资源,提高资源利用率和性能。清理完成后再次查看,磁盘使用情况的各项数据基本归零。
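如果只是想一次性清掉大部分无用资源,也可以考虑 docker system prune(示例;-a 会把未被任何容器引用的镜像一并删除,--volumes 会删除未使用的卷,执行前请确认):
# 交互确认后,清理已停止的容器、未使用的网络、悬空镜像和构建缓存
docker system prune
# 更激进:连同所有未被容器使用的镜像和匿名卷一起清理
docker system prune -a --volumes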
理解Docker0
先测试ip addr
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE # 主机执行ip addr没问题 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# ip addr # 127.0.0.1:主机回环地址 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever # 172.20.66.228:阿里云内网地址 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 00:16:3e:04:10:7d brd ff:ff:ff:ff:ff:ff inet 172.20.66.228/20 brd 172.20.79.255 scope global dynamic eth0 valid_lft 315075399sec preferred_lft 315075399sec inet6 fe80::216:3eff:fe04:107d/64 scope link valid_lft forever preferred_lft forever 3: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000 link/ether b6:97:6b:9f:ba:3b brd ff:ff:ff:ff:ff:ff # 172.17.0.1/16: docker0地址 4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:18:af:92:7c brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:18ff:feaf:927c/64 scope link valid_lft forever preferred_lft forever
思考:docker是如何处理容器网络访问的?
# 拉去镜像并新建一个tomcat容器 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -d -P --name tomcat01 tomcat # 进入容器,但是不进去,执行ip addr指令,但是报错了,说明是容器问题 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker exec -it tomcat01 ip addr OCI runtime exec failed: exec failed: unable to start container process: exec: "ip": executable file not found in $PATH: unknown # 解决办法: # 先进入容器中 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker exec -it tomcat01 /bin/bash # 没找到 root@75899fdbfd3e:/usr/local/tomcat# ip addr bash: ip: command not found # 进入运行的容器,查看版本 root@2de81861a3cd:/usr/local/tomcat# cat /etc/os-release PRETTY_NAME="Ubuntu 22.04.1 LTS" NAME="Ubuntu" VERSION_ID="22.04" VERSION="22.04.1 LTS (Jammy Jellyfish)" VERSION_CODENAME=jammy ID=ubuntu ID_LIKE=debian HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" UBUNTU_CODENAME=jammy # 网上查阅资料得知,该系统内置 apt,并非 yum # 系统内置 apt root@2de81861a3cd:/usr/local/tomcat# yum -y install iproute2 bash: yum: command not found # apt 安装失败,版本太低 root@2de81861a3cd:/usr/local/tomcat# apt install -y iproute2 Reading package lists... Done Building dependency tree... Done Reading state information... Done E: Unable to locate package iproute2 # apt 版本升级 root@2de81861a3cd:/usr/local/tomcat# apt update # 再次安装 root@2de81861a3cd:/usr/local/tomcat# apt install -y iproute2 # 成功了 # 查看容器的内部网络地址,ip addr , 发现容器启动的时候会得到一个 eth0@if66 ip地址,docker分配的 root@75899fdbfd3e:/usr/local/tomcat# ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 65: eth0@if66: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0 valid_lft forever preferred_lft forever # ping指令不行 root@75899fdbfd3e:/usr/local/tomcat# ping 172.17.0.2 bash: ping: command not found # 再安装 root@75899fdbfd3e:/usr/local/tomcat# apt-get install -y iputils-ping Reading package lists... Done Building dependency tree... Done Reading state information... Done The following NEW packages will be installed: iputils-ping 0 upgraded, 1 newly installed, 0 to remove and 69 not upgraded. Need to get 49.8 kB of archives. After this operation, 116 kB of additional disk space will be used. Get:1 http://deb.debian.org/debian bullseye/main amd64 iputils-ping amd64 3:20210202-1 [49.8 kB] Fetched 49.8 kB in 1s (52.1 kB/s) debconf: delaying package configuration, since apt-utils is not installed Selecting previously unselected package iputils-ping. (Reading database ... 12909 files and directories currently installed.) Preparing to unpack .../iputils-ping_3%3a20210202-1_amd64.deb ... Unpacking iputils-ping (3:20210202-1) ... Setting up iputils-ping (3:20210202-1) ... # 好了,不过我们是想看一下能否主机连通容器 root@75899fdbfd3e:/usr/local/tomcat# ping 172.17.0.2 PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data. 64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.021 ms 64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.027 ms 64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.029 ms # 退出容器 root@75899fdbfd3e:/usr/local/tomcat# exit # 在主机中能连通容器 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# ping 172.17.0.2 PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data. 
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.052 ms 64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.044 ms 64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.039 ms
发现问题:
我们只要安装了docker就会有一个网卡docker0
发现:我们每启动一个 docker 容器,docker 就会给容器分配一个 ip;docker0 使用的是桥接模式,相关技术是 veth-pair。
主机再执行 ip addr 测试,发现多了一块网卡 66: vethc430bd8@if65,它与容器中测试看到的 65: eth0@if66 正好是一对
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 00:16:3e:04:10:7d brd ff:ff:ff:ff:ff:ff inet 172.20.66.228/20 brd 172.20.79.255 scope global dynamic eth0 valid_lft 315056501sec preferred_lft 315056501sec inet6 fe80::216:3eff:fe04:107d/64 scope link valid_lft forever preferred_lft forever 3: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000 link/ether b6:97:6b:9f:ba:3b brd ff:ff:ff:ff:ff:ff 4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 02:42:18:af:92:7c brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:18ff:feaf:927c/64 scope link valid_lft forever preferred_lft forever 66: vethc430bd8@if65: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether da:22:a3:ec:39:10 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet6 fe80::d822:a3ff:feec:3910/64 scope link valid_lft forever preferred_lft forever
将这个tomcat01提交成镜像,再新建一个容器tomcat02后,再查看一下ip addr ,发现又多了一对网卡
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 00:16:3e:04:10:7d brd ff:ff:ff:ff:ff:ff inet 172.20.66.228/20 brd 172.20.79.255 scope global dynamic eth0 valid_lft 315056106sec preferred_lft 315056106sec inet6 fe80::216:3eff:fe04:107d/64 scope link valid_lft forever preferred_lft forever 3: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000 link/ether b6:97:6b:9f:ba:3b brd ff:ff:ff:ff:ff:ff 4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 02:42:18:af:92:7c brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:18ff:feaf:927c/64 scope link valid_lft forever preferred_lft forever 66: vethc430bd8@if65: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether da:22:a3:ec:39:10 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet6 fe80::d822:a3ff:feec:3910/64 scope link valid_lft forever preferred_lft forever 68: veth4979c11@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether 02:47:73:91:72:fd brd ff:ff:ff:ff:ff:ff link-netnsid 1 inet6 fe80::47:73ff:fe91:72fd/64 scope link valid_lft forever preferred_lft forever
# 我们发现容器带来的这些网卡,都是一对一对的
# veth-pair 就是一对虚拟设备接口,它们都是成对出现的:一端连着协议栈,一端彼此相连
# 正因为有这个特性,veth-pair 充当一个桥梁,连接着各种虚拟网络设备
# OpenStack、Docker 容器之间的连接、OVS 的连接,都是使用 veth-pair 技术
此时测试两个容器间能否连通:答案毫无疑问是可以的
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker exec -it tomcat02 ping 172.17.0.2 PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data. 64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.081 ms 64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.079 ms 64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.065 ms
绘制一个网络模型图:
结论:
# tomcat01 和 tomcat02 共用一个“路由器”:docker0
# 所有容器在不指定网络的情况下,都由 docker0 路由,docker 会给容器分配一个默认的可用 IP
# docker0 的地址是 172.17.0.1/16,这是一个局域网网段(子网掩码 255.255.0.0),可以容纳 6 万多个容器地址
# Docker 中的所有网络接口都是虚拟的,虚拟接口转发效率高!(内网传递文件!)
# 只要容器删除,对应的那一对 veth 网卡就没了!
--link
听说不太实用了,没有学习
自定义网络
# 查看所有docker网络 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker network ls NETWORK ID NAME DRIVER SCOPE acbfeed06e96 bridge bridge local 813dbef5a81b host host local 4cb995b1a04e none null local
- bridge:桥接模式,走 docker0(默认,自己创建的网络一般也使用 bridge 模式)
- none:不配置网络
- host:和宿主机共享网络
- container:容器间网络连通!(用得少,局限性大)
测试:
# 直接启动命令默认添加了--net bridge 为docker0 docker run -d -P --name tomcat01 tomcat docker run -d -P --name tomcat01 --net bridge tomcat # docker0特点 , 默认: 域名不能访问, --link可以打通 # 我们可以自定义一个网络 # --driver bridge 管理网络的驱动程序(默认为“bridge”) # --subnet 192.168.0.0/16 表示网段 16位65535个 # --gateway 192.168.0.1 IPv4或IPv6主子网网关 # 创建一个网络 起名叫mynet [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet 4859c3a17c4d1707684b7f42ea12501a4c1dbec04c78570995a0e7c4890ab364 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker network ls NETWORK ID NAME DRIVER SCOPE acbfeed06e96 bridge bridge local 813dbef5a81b host host local 4859c3a17c4d mynet bridge local 4cb995b1a04e none null local # 查看mynet网络元数据,能找到网段和网关 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker network inspect mynet [ { "Name": "mynet", "Id": "4859c3a17c4d1707684b7f42ea12501a4c1dbec04c78570995a0e7c4890ab364", "Created": "2023-12-11T16:41:51.694208397+08:00", "Scope": "local", "Driver": "bridge", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": {}, "Config": [ { "Subnet": "192.168.0.0/16", "Gateway": "192.168.0.1" } ] }, "Internal": false, "Attachable": false, "Ingress": false, "ConfigFrom": { "Network": "" }, "ConfigOnly": false, "Containers": {}, "Options": {}, "Labels": {} } ] # 新建两个容器,指定网络为mynet [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -d -P --name tomcat-net-01 --net mynet tomcatmy:1.0 3b2f0be4e0d1fe82b8e2aedd514b3cbe428bb12791fb2942f89d94c591f90b8d [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -d -P --name tomcat-net-02 --net mynet tomcatmy:1.0 4ef3d8e0a5e954f79282c3e7e85e27881aec0b1d859868aaefa6e5f608c8f882 # 可以看到网络中有两个容器,它们的ip是我们设置的 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker network inspect mynet [ { "Name": "mynet", "Id": "4859c3a17c4d1707684b7f42ea12501a4c1dbec04c78570995a0e7c4890ab364", "Created": "2023-12-11T16:41:51.694208397+08:00", "Scope": "local", "Driver": "bridge", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": {}, "Config": [ { "Subnet": "192.168.0.0/16", "Gateway": "192.168.0.1" } ] }, "Internal": false, "Attachable": false, "Ingress": false, "ConfigFrom": { "Network": "" }, "ConfigOnly": false, "Containers": { "3b2f0be4e0d1fe82b8e2aedd514b3cbe428bb12791fb2942f89d94c591f90b8d": { "Name": "tomcat-net-01", "EndpointID": "922ef17c99421088361d25f1e34f1f9b6aad41b03ef7d747bd25219b98e5a50d", "MacAddress": "02:42:c0:a8:00:02", "IPv4Address": "192.168.0.2/16", "IPv6Address": "" }, "4ef3d8e0a5e954f79282c3e7e85e27881aec0b1d859868aaefa6e5f608c8f882": { "Name": "tomcat-net-02", "EndpointID": "9b98bbc4c7badabcbcb728287586693e17f0525dd436092e9440e858c2dcc72f", "MacAddress": "02:42:c0:a8:00:03", "IPv4Address": "192.168.0.3/16", "IPv6Address": "" } }, "Options": {}, "Labels": {} } ] # 容器互相连接一下 现在不使用--link也可以ping名字了 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker exec -it tomcat-net-01 ping 192.168.0.3 PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data. 64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.087 ms 64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.057 ms [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker exec -it tomcat-net-02 ping 192.168.0.2 PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data. 64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.036 ms 64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.034 ms
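在自定义网络里,Docker 内置的 DNS 会把容器名解析成对应的 IP,所以直接按容器名也能互相 ping 通(一个补充验证,预期结果与上面按 IP ping 一致):
# 自定义网络 mynet 下直接按容器名互 ping
docker exec -it tomcat-net-01 ping tomcat-net-02
docker exec -it tomcat-net-02 ping tomcat-net-01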
可以看到,我们自定义的网络,Docker 已经帮我们维护好了容器名和 IP 之间的对应关系,推荐平时这样使用网络!
好处:
- redis:不同的集群使用不同的网络,保证集群是安全和健康的
- mysql:不同的集群使用不同的网络,保证集群是安全和健康的
网络连通
不同网络之间默认无法连通:
[root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -d -P --name tomcat01 tomcatmy:1.0 f63e5339bf0fa0758e84416f7306cf5ebad77667b5a71ed72ca4bb4bcc13fb50 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -d -P --name tomcat02 tomcatmy:1.0 ece14f91bffcb97000b48573e0c754dbc6147a98fb6b25e9d0fca94dcf22e4ba [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -d -P --name tomcat-net-01 --net mynet tomcatmy:1.0 c866456fe67d20befe13a743101ae93d12202cfefa1ccf2cc1d8c995f2188b26 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker run -d -P --name tomcat-net-02 --net mynet tomcatmy:1.0 114ec37915f17dd9374089121cf9b56c1f17921a13bb1be4968cb95f2ffc3499 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 114ec37915f1 tomcatmy:1.0 "catalina.sh run" 4 seconds ago Up 3 seconds 0.0.0.0:32786->8080/tcp, :::32786->8080/tcp tomcat-net-02 c866456fe67d tomcatmy:1.0 "catalina.sh run" 9 seconds ago Up 8 seconds 0.0.0.0:32785->8080/tcp, :::32785->8080/tcp tomcat-net-01 ece14f91bffc tomcatmy:1.0 "catalina.sh run" 44 seconds ago Up 43 seconds 0.0.0.0:32784->8080/tcp, :::32784->8080/tcp tomcat02 f63e5339bf0f tomcatmy:1.0 "catalina.sh run" 47 seconds ago Up 46 seconds 0.0.0.0:32783->8080/tcp, :::32783->8080/tcp tomcat01 # 不同网络中的容器是不可能直接连通的 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker exec -it tomcat01 ping tomcat-net-01 ping: tomcat-net-01: Name or service not known
# 网络跟网络之间不能打通,但容器跟网络之间可以打通: 打通mynet网络和tomcat01容器 # 连通之后就是将tomcat01放到了mynet网络下,一个容器两个ip地址! [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker network connect mynet tomcat01 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker network inspect mynet [ { "Name": "mynet", "Id": "4859c3a17c4d1707684b7f42ea12501a4c1dbec04c78570995a0e7c4890ab364", "Created": "2023-12-11T16:41:51.694208397+08:00", "Scope": "local", "Driver": "bridge", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": {}, "Config": [ { "Subnet": "192.168.0.0/16", "Gateway": "192.168.0.1" } ] }, "Internal": false, "Attachable": false, "Ingress": false, "ConfigFrom": { "Network": "" }, "ConfigOnly": false, "Containers": { "114ec37915f17dd9374089121cf9b56c1f17921a13bb1be4968cb95f2ffc3499": { "Name": "tomcat-net-02", "EndpointID": "449ca727c57d007835284afe7e012cd45185588116bf5cf64690b448920cbcd0", "MacAddress": "02:42:c0:a8:00:03", "IPv4Address": "192.168.0.3/16", "IPv6Address": "" }, "c866456fe67d20befe13a743101ae93d12202cfefa1ccf2cc1d8c995f2188b26": { "Name": "tomcat-net-01", "EndpointID": "ead74fded6e7cfbe7c7918ef2a85d0ac1f668812e1b67365b4248223c713ab5e", "MacAddress": "02:42:c0:a8:00:02", "IPv4Address": "192.168.0.2/16", "IPv6Address": "" }, "f63e5339bf0fa0758e84416f7306cf5ebad77667b5a71ed72ca4bb4bcc13fb50": { "Name": "tomcat01", "EndpointID": "315f8def4980dc9ecf7931e05d30e58c0450418540966df058c2ff40c07b3ba2", "MacAddress": "02:42:c0:a8:00:04", "IPv4Address": "192.168.0.4/16", "IPv6Address": "" } }, "Options": {}, "Labels": {} } ] # 现在tomcat01和tomcat-net的在一个网络里了,可以连通了 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker exec -it tomcat01 ping tomcat-net-01 PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data. 64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.078 ms 64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.054 ms # tomcat02依旧是不能连通tomcat-net的 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker exec -it tomcat02 ping tomcat-net-01 ping: tomcat-net-01: Name or service not known
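docker network connect 也有对应的反向操作:不再需要互通时,可以把容器从网络里断开(一个小示例):
# 把 tomcat01 从 mynet 网络中断开,断开后它只剩 docker0 上的那个 IP
docker network disconnect mynet tomcat01
# 再查看 mynet,Containers 里就不再有 tomcat01 了
docker network inspect mynet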
Redis集群
# 创建redis网卡 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker network create redis --subnet 172.38.0.0/16 852ff999096f38d92c80e8822172b4e85b1183714d39beae6afaab2200b25f41 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker network ls NETWORK ID NAME DRIVER SCOPE acbfeed06e96 bridge bridge local 813dbef5a81b host host local 4859c3a17c4d mynet bridge local 4cb995b1a04e none null local 852ff999096f redis bridge local # 查看redis网卡详细信息 [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker network inspect redis [ { "Name": "redis", "Id": "852ff999096f38d92c80e8822172b4e85b1183714d39beae6afaab2200b25f41", "Created": "2023-12-11T21:01:35.671496238+08:00", "Scope": "local", "Driver": "bridge", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": {}, "Config": [ { "Subnet": "172.38.0.0/16" } ] }, "Internal": false, "Attachable": false, "Ingress": false, "ConfigFrom": { "Network": "" }, "ConfigOnly": false, "Containers": {}, "Options": {}, "Labels": {} } ] # 通过脚本创建6个redis配置 for port in $(seq 1 6); \ do \ mkdir -p /mydata/redis/node-${port}/conf touch /mydata/redis/node-${port}/conf/redis.conf cat << EOF >/mydata/redis/node-${port}/conf/redis.conf port 6379 bind 0.0.0.0 cluster-enabled yes cluster-config-file nodes.conf cluster-node-timeout 5000 cluster-announce-ip 172.38.0.1${port} cluster-announce-port 6379 cluster-announce-bus-port 16379 appendonly yes EOF done # 通过脚本创建6个redis容器 for port in $(seq 1 6); \ do \ docker run -p 637${port}:5379 -p 1637${port}:16379 --name redis-${port} \ -v /mydata/redis/node-${port}/data:/data \ -v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \ -d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf; \ done [root@iZ0jl1aa09m86u22kxbhvhZ ~]# for port in $(seq 1 6); \ > do \ > mkdir -p /mydata/redis/node-${port}/conf > touch /mydata/redis/node-${port}/conf/redis.conf > cat << EOF >/mydata/redis/node-${port}/conf/redis.conf > port 6379 > bind 0.0.0.0 > cluster-enabled yes > cluster-config-file nodes.conf > cluster-node-timeout 5000 > cluster-announce-ip 172.38.0.1${port} > cluster-announce-port 6379 > cluster-announce-bus-port 16379 > appendonly yes > EOF > done [root@iZ0jl1aa09m86u22kxbhvhZ redis]# for port in $(seq 1 6); \ > do \ > docker run -p 637${port}:5379 -p 1637${port}:16379 --name redis-${port} \ > -v /mydata/redis/node-${port}/data:/data \ > -v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \ > -d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf; \ > done ... 
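注意:上面循环中 -p 637${port}:5379 的容器端口 5379 疑似笔误,redis.conf 里监听的是 6379;如果希望宿主机端口真正映射到 Redis 服务,按该配置推断应写成下面这样(一个按此假设修正的写法,其余参数不变):
# 按 redis.conf 中 port 6379 修正后的端口映射
for port in $(seq 1 6); do
  docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
    -v /mydata/redis/node-${port}/data:/data \
    -v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done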
[root@iZ0jl1aa09m86u22kxbhvhZ redis]# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a54ec1d0737d redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 32 seconds ago Up 31 seconds 6379/tcp, 0.0.0.0:6376->5379/tcp, :::6376->5379/tcp, 0.0.0.0:16376->16379/tcp, :::16376->16379/tcp redis-6 c764180d549e redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 32 seconds ago Up 31 seconds 6379/tcp, 0.0.0.0:6375->5379/tcp, :::6375->5379/tcp, 0.0.0.0:16375->16379/tcp, :::16375->16379/tcp redis-5 76cc94dd81e9 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 32 seconds ago Up 32 seconds 6379/tcp, 0.0.0.0:6374->5379/tcp, :::6374->5379/tcp, 0.0.0.0:16374->16379/tcp, :::16374->16379/tcp redis-4 b03c2e8967a9 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 33 seconds ago Up 32 seconds 6379/tcp, 0.0.0.0:6373->5379/tcp, :::6373->5379/tcp, 0.0.0.0:16373->16379/tcp, :::16373->16379/tcp redis-3 901643c042af redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 33 seconds ago Up 32 seconds 6379/tcp, 0.0.0.0:6372->5379/tcp, :::6372->5379/tcp, 0.0.0.0:16372->16379/tcp, :::16372->16379/tcp redis-2 f747486f45a1 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 34 seconds ago Up 32 seconds 6379/tcp, 0.0.0.0:6371->5379/tcp, :::6371->5379/tcp, 0.0.0.0:16371->16379/tcp, :::16371->16379/tcp redis-1 # 进入redis容器命令不是bash,是sh [root@iZ0jl1aa09m86u22kxbhvhZ ~]# docker exec -it redis-1 /bin/sh /data # ls appendonly.aof nodes.conf # redis集群 /data # redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --c luster-replicas 1 >>> Performing hash slots allocation on 6 nodes... Master[0] -> Slots 0 - 5460 Master[1] -> Slots 5461 - 10922 Master[2] -> Slots 10923 - 16383 Adding replica 172.38.0.15:6379 to 172.38.0.11:6379 Adding replica 172.38.0.16:6379 to 172.38.0.12:6379 Adding replica 172.38.0.14:6379 to 172.38.0.13:6379 M: ed451062b6637e7e0ea258100803725c0d92f922 172.38.0.11:6379 slots:[0-5460] (5461 slots) master M: 9fefdea4cce147dd12d2162c91f47e1a836553fe 172.38.0.12:6379 slots:[5461-10922] (5462 slots) master M: 1127023db33c041d5f7f801f9badf4fa418c8dce 172.38.0.13:6379 slots:[10923-16383] (5461 slots) master S: 69449926d26e356fb2328d8fe05d699174d98799 172.38.0.14:6379 replicates 1127023db33c041d5f7f801f9badf4fa418c8dce S: 86a886441ef278d1e3d43f87edd5235c609ee854 172.38.0.15:6379 replicates ed451062b6637e7e0ea258100803725c0d92f922 S: a708c27df5cec5f650f924898c81fddf00f3a17b 172.38.0.16:6379 replicates 9fefdea4cce147dd12d2162c91f47e1a836553fe Can I set the above configuration? (type 'yes' to accept): yes >>> Nodes configuration updated >>> Assign a different config epoch to each node >>> Sending CLUSTER MEET messages to join the cluster Waiting for the cluster to join ... 
>>> Performing Cluster Check (using node 172.38.0.11:6379) M: ed451062b6637e7e0ea258100803725c0d92f922 172.38.0.11:6379 slots:[0-5460] (5461 slots) master 1 additional replica(s) M: 1127023db33c041d5f7f801f9badf4fa418c8dce 172.38.0.13:6379 slots:[10923-16383] (5461 slots) master 1 additional replica(s) S: 86a886441ef278d1e3d43f87edd5235c609ee854 172.38.0.15:6379 slots: (0 slots) slave replicates ed451062b6637e7e0ea258100803725c0d92f922 M: 9fefdea4cce147dd12d2162c91f47e1a836553fe 172.38.0.12:6379 slots:[5461-10922] (5462 slots) master 1 additional replica(s) S: 69449926d26e356fb2328d8fe05d699174d98799 172.38.0.14:6379 slots: (0 slots) slave replicates 1127023db33c041d5f7f801f9badf4fa418c8dce S: a708c27df5cec5f650f924898c81fddf00f3a17b 172.38.0.16:6379 slots: (0 slots) slave replicates 9fefdea4cce147dd12d2162c91f47e1a836553fe # 进入redis集群 /data # redis-cli -c 127.0.0.1:6379> cluster info cluster_state:ok cluster_slots_assigned:16384 cluster_slots_ok:16384 cluster_slots_pfail:0 cluster_slots_fail:0 cluster_known_nodes:6 cluster_size:3 cluster_current_epoch:6 cluster_my_epoch:1 cluster_stats_messages_ping_sent:203 cluster_stats_messages_pong_sent:215 cluster_stats_messages_sent:418 cluster_stats_messages_ping_received:210 cluster_stats_messages_pong_received:203 cluster_stats_messages_meet_received:5 cluster_stats_messages_received:418 # 查看集群节点 能看到3个主机(master)3个从机(slave) 127.0.0.1:6379> cluster nodes 1127023db33c041d5f7f801f9badf4fa418c8dce 172.38.0.13:6379@16379 master - 0 1702302653811 3 connected 10923-16383 86a886441ef278d1e3d43f87edd5235c609ee854 172.38.0.15:6379@16379 slave ed451062b6637e7e0ea258100803725c0d92f922 0 1702302654000 5 connected 9fefdea4cce147dd12d2162c91f47e1a836553fe 172.38.0.12:6379@16379 master - 0 1702302654000 2 connected 5461-10922 69449926d26e356fb2328d8fe05d699174d98799 172.38.0.14:6379@16379 slave 1127023db33c041d5f7f801f9badf4fa418c8dce 0 1702302654512 4 connected ed451062b6637e7e0ea258100803725c0d92f922 172.38.0.11:6379@16379 myself,master - 0 1702302654000 1 connected 0-5460 a708c27df5cec5f650f924898c81fddf00f3a17b 172.38.0.16:6379@16379 slave 9fefdea4cce147dd12d2162c91f47e1a836553fe 0 1702302654813 6 connected # 设一个参数a,数值为b 随机被一个主机保存(这里是redis-3) 127.0.0.1:6379> set a b -> Redirected to slot [15495] located at 172.38.0.13:6379 OK #此时在外面将redis-3停掉,应该会被从机保存,此时获取a,发现是从机redis-4保存了,它现在变成主机了master 127.0.0.1:6379> cluster nodes 1127023db33c041d5f7f801f9badf4fa418c8dce 172.38.0.13:6379@16379 master,fail - 1702303013188 1702303011483 3 connected 86a886441ef278d1e3d43f87edd5235c609ee854 172.38.0.15:6379@16379 slave ed451062b6637e7e0ea258100803725c0d92f922 0 1702303124598 5 connected 9fefdea4cce147dd12d2162c91f47e1a836553fe 172.38.0.12:6379@16379 master - 0 1702303123697 2 connected 5461-10922 69449926d26e356fb2328d8fe05d699174d98799 172.38.0.14:6379@16379 master - 0 1702303124097 7 connected 10923-16383 ed451062b6637e7e0ea258100803725c0d92f922 172.38.0.11:6379@16379 myself,master - 0 1702303122000 1 connected 0-5460 a708c27df5cec5f650f924898c81fddf00f3a17b 172.38.0.16:6379@16379 slave 9fefdea4cce147dd12d2162c91f47e1a836553fe 0 1702303124698 6 connected 127.0.0.1:6379> get a -> Redirected to slot [15495] located at 172.38.0.14:6379 "b" 172.38.0.14:6379> get a "b"
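上面提到的"在外面将 redis-3 停掉",对应的操作就是在宿主机另开一个终端执行(补充):
# 停掉 redis-3 这个 master,触发集群故障转移,由它的从机 redis-4 顶上成为新的 master
docker stop redis-3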
springboot微服务打包Docker镜像
创建一个springboot项目:
<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.docker</groupId> <artifactId>docker-01</artifactId> <version>0.0.1-SNAPSHOT</version> <name>docker-01</name> <description>docker-01</description> <properties> <java.version>1.8</java.version> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> <spring-boot.version>2.7.6</spring-boot.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> <dependencyManagement> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-dependencies</artifactId> <version>${spring-boot.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> <version>${spring-boot.version}</version> <executions> <execution> <goals> <goal>repackage</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project>
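项目写好后先在本地打成 jar 包(一个最小示例,假设本机已安装 Maven,产物名由 pom 里的 artifactId 和 version 决定):
# 跳过测试打包,产物在 target/ 目录下,即 target/docker-01-0.0.1-SNAPSHOT.jar
mvn clean package -DskipTests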
编写一个Dockerfile文件:
FROM java:8
COPY *.jar /app.jar
CMD ["--server.port=8080"]
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
用 Xftp 把打好的 jar 包和 Dockerfile 上传到服务器上新建的 /home/idea 目录下
服务器构建镜像启动服务:
[root@iZ0jl1aa09m86u22kxbhvhZ idea]# ls docker-01-0.0.1-SNAPSHOT.jar Dockerfile # 构建镜像 [root@iZ0jl1aa09m86u22kxbhvhZ idea]# docker build -t springbootdocker . ... [root@iZ0jl1aa09m86u22kxbhvhZ idea]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE springbootdocker latest f65c935ae8e9 17 seconds ago 661MB # 启动服务 [root@iZ0jl1aa09m86u22kxbhvhZ idea]# docker run -d -P --name zhangzhenspringboot springbootdocker 29ca6733edfa3291d70e136b6589e5dd52cddead2804bc7fcbe5010b45e016ec [root@iZ0jl1aa09m86u22kxbhvhZ idea]# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 29ca6733edfa springbootdocker "java -jar /app.jar …" 16 seconds ago Up 14 seconds 0.0.0.0:32768->8080/tcp, :::32768->8080/tcp zhangzhenspringboot # 远程访问本机的端口号能访问到 访问本机的32768端口就是在访问服务器的32768端口 [root@iZ0jl1aa09m86u22kxbhvhZ idea]# curl localhost:32768 <html> <body> <h1>hello word!!!</h1> <p>this is a html page</p> </body> </html>[root@iZ0jl1aa09m86u22kxbhvhZ idea]#
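补充说明:Dockerfile 里 ENTRYPOINT 是固定的启动命令,CMD 只是它的默认参数,容器里实际执行的就是上面 docker ps 中看到的 java -jar /app.jar --server.port=8080;-P 是随机映射端口,如果希望公网访问的端口固定,也可以用 -p 显式指定(一个小示例,容器名仅作演示):
# 把宿主机 8080 固定映射到容器 8080,这样公网直接访问 http://服务器IP:8080/hello 即可
docker run -d -p 8080:8080 --name springboot-8080 springbootdocker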
访问公网:http://8.130.52.167:32768/hello 能访问到,成功了!