Docker Networking

This article walks through Docker's networking mechanics: the docker0 bridge network, how containers connect to one another, creating and using custom networks, and connecting containers across networks with `docker network connect`. It closes with a hands-on Redis cluster deployment, highlighting the isolation and safety benefits of custom networks.


Docker Networking

How does Docker handle container network access?

Docker0

Remove containers that could interfere with the experiment

First remove all containers: docker rm -f $(docker ps -aq)

Check the host's network interfaces

Check interface info: ip addr
Notice that one of the interfaces listed is docker0

Add the iproute2 tooling to a stock tomcat container and commit it as a custom image, ip-tomcat

The stock tomcat image ships without network tooling, so install iproute2, net-tools, and iputils-ping yourself, then commit the container as a custom image named ip-tomcat.

# Run a stock tomcat container
centos> docker run -it --name tomcat01 tomcat /bin/bash
# Install the network tools
tomcat> apt update && apt install -y iproute2 net-tools iputils-ping
# ..... installing
tomcat> exit
# Commit it as a custom image, ip-tomcat
centos> docker commit tomcat01 ip-tomcat
# Remove the tomcat01 container
centos> docker rm -f tomcat01

Create a container from the ip-tomcat image (docker run -d --name tomcat01 ip-tomcat) and check its interfaces with ip addr

tomcat01

[root@VM-8-9-centos ~]# docker exec -it tomcat01 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
261: eth0@if262: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

Host

[root@VM-8-9-centos ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 52:54:00:f9:05:3c brd ff:ff:ff:ff:ff:ff
    inet 10.0.8.9/22 brd 10.0.11.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fef9:53c/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:43:1f:f2:f4 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:43ff:fe1f:f2f4/64 scope link 
       valid_lft forever preferred_lft forever
262: veth7e25f43@if261: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 1a:21:ef:cc:9b:91 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::1821:efff:fecc:9b91/64 scope link 
       valid_lft forever preferred_lft forever

Notice that the tomcat01 container and the host each gained a new interface, and the two interface indices differ by exactly 1 (261 in the container, 262 on the host).

The container's eth0@if262 was created and assigned by Docker!

Test whether the host can ping the container

[root@VM-8-9-centos ~]# ping  172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.076 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.060 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.041 ms

The Linux host can ping the Docker container!

Containers can also ping each other.

Create a second ip-tomcat container, tomcat02, and try to ping tomcat01:

[root@VM-8-9-centos ~]# docker run -d --name tomcat02 ip-tomcat
[root@VM-8-9-centos ~]# docker exec -it tomcat02 ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.100 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.076 ms

The ping succeeds.

How it works

Every time a container starts, Docker assigns it an IP address. As soon as Docker is installed, there is a docker0 interface working in bridge mode, implemented with veth pair technology!

In the example above, when tomcat01 was created, the container got interface 261: eth0@if262 and the host got the matching 262: veth7e25f43@if261; together they form a veth pair.

A veth pair is a pair of virtual interfaces that always come in twos: one end plugs into the protocol stack, and the two ends are wired to each other.

The veth pair acts as a bridge, linking the various virtual network devices together.

Unless a network is specified, all containers are routed through docker0, and Docker assigns each one a free IP address by default.
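That default allocation can be pictured with Python's standard ipaddress module: docker0 takes the first host address of 172.17.0.0/16, and containers receive the following ones in order. This is a simplified sketch of the numbering seen in the transcripts above, not Docker's actual allocator (which also tracks and reuses released addresses):

```python
import ipaddress

# The default bridge subnet, as shown by `docker network inspect bridge`
subnet = ipaddress.ip_network("172.17.0.0/16")
hosts = subnet.hosts()  # generator over the usable host addresses

gateway = next(hosts)           # 172.17.0.1 -> taken by docker0 itself
first_container = next(hosts)   # 172.17.0.2 -> first container (tomcat01)
second_container = next(hosts)  # 172.17.0.3 -> second container (tomcat02)

print(gateway, first_container, second_container)
# -> 172.17.0.1 172.17.0.2 172.17.0.3
```

This matches the addresses in the ip addr output above: docker0 holds 172.17.0.1/16 and tomcat01 got 172.17.0.2/16.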

Summary

Docker uses Linux bridging; on the host, docker0 is the bridge for Docker containers.
All network interfaces in Docker are virtual, and virtual interfaces forward traffic very efficiently (data never leaves the host).

When a container is stopped or removed, its veth pair disappears with it (the pair exists only while the container runs and is recreated on every start).

--link

Goal: high availability. Even if a container's IP address changes, the application should still locate the target service, by addressing it through a unique name.

Pinging one container from another by name fails out of the box:

[root@VM-8-9-centos ~]# docker exec -it tomcat02 ping tomcat01
ping: tomcat01: Name or service not known

With the --link flag, a container can be wired to another, after which it can ping that container directly by name (or container ID):

[root@VM-8-9-centos ~]# docker run -d --name tomcat03 --link tomcat02 ip-tomcat
ad53d25fe3ceaef901d7c16ea819d54c7a3bf29c211be5d9e12fb4aa3eaa8225
[root@VM-8-9-centos ~]# docker exec -it tomcat03 ping tomcat02
PING tomcat02 (172.17.0.3) 56(84) bytes of data.
64 bytes from tomcat02 (172.17.0.3): icmp_seq=1 ttl=64 time=0.111 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=2 ttl=64 time=0.061 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=3 ttl=64 time=0.073 ms

Note:
A --link connection is one-way; the reverse direction does not resolve. Both containers would need a --link to get two-way name resolution!

[root@VM-8-9-centos ~]# docker exec -it tomcat02 ping tomcat03
ping: tomcat03: Name or service not known

Inspecting Docker networks: docker network
Help:

Usage:  docker network COMMAND

Manage networks

Commands:
  connect     Connect a container to a network
  create      Create a network
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks

docker network ls

[root@VM-8-9-centos ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
b31f3de3e5ff   bridge    bridge    local
93cde78e2b58   host      host      local
0dec21c5d6a8   none      null      local

docker network inspect (network ID)

[root@VM-8-9-centos ~]# docker network inspect b31f3de3e5ff
[
    {
        "Name": "bridge",
        "Id": "b31f3de3e5ff0549da27a9a25f0accc96d207ba28a46a37c79985e8bd1cf2da6",
        "Created": "2022-03-29T23:11:54.436421179+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "25318c1306be0deedaab5ebc9b2bc0134383eb0f304af7d7cb02fe7caad06d27": {
                "Name": "tomcat02",
                "EndpointID": "24dcc29f5a58d960608916fc416cfb6380ea2a57e144e8a170c509ebcc981661",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "aadfe9d280298887a4b4971801f360d78578eaf32337a6625d713b8dc04f95a2": {
                "Name": "tomcat01",
                "EndpointID": "c19611d2471c532f551d4a42cf440e678d18136465ddf53ba8c8c532fec923f6",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "ad53d25fe3ceaef901d7c16ea819d54c7a3bf29c211be5d9e12fb4aa3eaa8225": {
                "Name": "tomcat03",
                "EndpointID": "5c398ff94f5cf5e96eb9313a3e61b472b3829f99605be21903d005d255ba4859",
                "MacAddress": "02:42:ac:11:00:04",
                "IPv4Address": "172.17.0.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

How it works: a mapping is written into the container's /etc/hosts

In other words, tomcat03 simply has tomcat02's IP mapping configured locally:

[root@VM-8-9-centos ~]# docker exec -it tomcat03 cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3      tomcat02 25318c1306be
172.17.0.4      ad53d25fe3ce

Essence: --link just adds a line like 172.17.0.3 tomcat02 25318c1306be to the container's hosts file.
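What the container's resolver does with that file can be mimicked in a few lines of Python. This is a toy illustration of hosts-file lookup, not Docker's actual code; the sample text is taken from tomcat03's /etc/hosts shown above:

```python
def resolve(hosts_text, name):
    """Look a name up in /etc/hosts-style text: 'IP  name [alias...]' per line."""
    for line in hosts_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and name in parts[1:]:
            return parts[0]
    return None  # no entry -> 'Name or service not known'

# The --link-relevant lines from tomcat03's /etc/hosts
hosts = """172.17.0.3      tomcat02 25318c1306be
172.17.0.4      ad53d25fe3ce"""

print(resolve(hosts, "tomcat02"))  # -> 172.17.0.3
print(resolve(hosts, "tomcat01"))  # -> None
```

This also explains why the link is one-way: tomcat02's own hosts file never received a tomcat03 entry, so the reverse lookup fails.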

Using --link is discouraged nowadays!
Prefer custom networks over docker0; docker0 does not support access by container name!

Custom networks


Network modes

  • bridge: bridged via docker0 (the default)
  • none: no networking configured
  • host: share the host's network stack
  • container: share another container's network (rarely used)

Containers start with --net bridge by default; the two commands below are equivalent:

docker run -d --name tomcat01 ip-tomcat
docker run -d --name tomcat01 --net bridge ip-tomcat

bridge here is the default docker0 network, on which container names do not resolve; connectivity by name requires --link.

Creating a network

Usage:  docker network create [OPTIONS] NETWORK

Create a network

Options:
      --attachable           Enable manual container attachment
      --aux-address map      Auxiliary IPv4 or IPv6 addresses used by Network driver
                             (default map[])
      --config-from string   The network from which to copy the configuration
      --config-only          Create a configuration only network
  -d, --driver string        Driver to manage the Network (default "bridge")
      --gateway strings      IPv4 or IPv6 Gateway for the master subnet
      --ingress              Create swarm routing-mesh network
      --internal             Restrict external access to the network
      --ip-range strings     Allocate container ip from a sub-range
      --ipam-driver string   IP Address Management Driver (default "default")
      --ipam-opt map         Set IPAM driver specific options (default map[])
      --ipv6                 Enable IPv6 networking
      --label list           Set metadata on a network
  -o, --opt map              Set driver specific options (default map[])
      --scope string         Control the network's scope
      --subnet strings       Subnet in CIDR format that represents a network segment

Create a simple network, mynet

  • --driver bridge: network driver (bridge mode)
  • --subnet 192.168.0.0/16: subnet (usable container addresses 192.168.0.2 ~ 192.168.255.254)
  • --gateway 192.168.0.1: gateway address
[root@VM-8-9-centos ~]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
dc3360444237c65c301416861b9c8a04c500354d2a5bfa715ccf86d01a07fc32

[root@VM-8-9-centos ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
b31f3de3e5ff   bridge    bridge    local
93cde78e2b58   host      host      local
dc3360444237   mynet     bridge    local
0dec21c5d6a8   none      null      local

[root@VM-8-9-centos ~]# docker network inspect mynet 
[
    {
        "Name": "mynet",
        "Id": "dc3360444237c65c301416861b9c8a04c500354d2a5bfa715ccf86d01a07fc32",
        "Created": "2022-03-31T17:31:17.128610062+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
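The Subnet and Gateway fields above determine the address pool mynet can hand out. Python's standard ipaddress module confirms the usable range (the gateway 192.168.0.1 is the first usable host, so containers start at 192.168.0.2):

```python
import ipaddress

net = ipaddress.ip_network("192.168.0.0/16")
hosts = list(net.hosts())   # usable hosts: network and broadcast excluded

print(net.num_addresses)    # -> 65536 addresses in the /16 overall
print(hosts[0], hosts[-1])  # -> 192.168.0.1 192.168.255.254
# Minus the gateway 192.168.0.1, that leaves 65533 addresses for containers.
```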

Start containers attached to the mynet network

[root@VM-8-9-centos ~]# docker run -d --name tomcat-net-01 --net mynet ip-tomcat
cbc91c79cb279e4192452ea5c0cf5f37f21c77c9272bd4c0b98bf7861263e04a
[root@VM-8-9-centos ~]# docker run -d --name tomcat-net-02 --net mynet ip-tomcat

[root@VM-8-9-centos ~]# docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "dc3360444237c65c301416861b9c8a04c500354d2a5bfa715ccf86d01a07fc32",
        "Created": "2022-03-31T17:31:17.128610062+08:00",
        "Scope": "local",
        "Driver": "bridge",
        .....
        "ConfigOnly": false,
        "Containers": {
            "cbc91c79cb279e4192452ea5c0cf5f37f21c77c9272bd4c0b98bf7861263e04a": {
                "Name": "tomcat-net-01",
                "EndpointID": "8c8ac809cd39021a82fe350a1d7c5898dcdf05dc925272fc0c7bad03b0212de0",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

On a self-created network, containers can reach each other by container name or container ID!
That is why custom networks are recommended!!!

[root@VM-8-9-centos ~]# docker exec -it tomcat-net-01 ping tomcat-net-02
PING tomcat-net-02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.108 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.078 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=3 ttl=64 time=0.071 ms
[root@VM-8-9-centos ~]# docker exec -it tomcat-net-02 ping tomcat-net-01
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.063 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.076 ms

Benefit: different clusters live on different networks, keeping each cluster isolated and healthy.

Connecting networks: docker network connect

Containers on different networks cannot reach each other by name

Start a container tomcat01 on the docker0 network and a container tomcat-net-01 on the mynet network.

Pinging in either direction fails:

[root@VM-8-9-centos ~]# docker exec -it tomcat01 ping tomcat-net-01 
ping: tomcat-net-01: Name or service not known
[root@VM-8-9-centos ~]# docker exec -it tomcat-net-01 ping tomcat01
ping: tomcat01: Name or service not known

docker network connect attaches a container to a network

Usage:  docker network connect [OPTIONS] NETWORK CONTAINER

Connect a container to a network

Options:
      --alias strings           Add network-scoped alias for the container
      --driver-opt strings      driver options for the network
      --ip string               IPv4 address (e.g., 172.30.100.104)
      --ip6 string              IPv6 address (e.g., 2001:db8::33)
      --link list               Add link to another container
      --link-local-ip strings   Add a link-local address for the container

Test

[root@VM-8-9-centos ~]# docker network connect mynet tomcat01

docker network inspect mynet now lists the tomcat01 container as well; under the hood, Docker assigned it an additional IP on that network.
docker exec -it tomcat01 ip addr shows an extra address, 192.168.0.3, assigned from the mynet subnet.
Pinging now works in both directions, and tomcat01 is connected to every container on the mynet network!!

[root@VM-8-9-centos ~]# docker exec -it tomcat01 ping tomcat-net-01 
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.081 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.073 ms


Hands-on: deploying a Redis cluster

Create a dedicated network for Redis

docker network create redis --subnet 172.38.0.0/16

Create the config files for the 6 Redis nodes (do this before starting the containers, so the mounted configs already exist)

for id in $(seq 1 6); \
do \
mkdir -p /root/data/redis/node-${id}/conf
touch /root/data/redis/node-${id}/conf/redis.conf
cat << EOF >/root/data/redis/node-${id}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes 
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${id}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done

Start the 6 Redis services

for id in $(seq 1 6); \
do \
docker run --name redis-${id} \
-v /root/data/redis/node-${id}/data/:/data/ \
-v /root/data/redis/node-${id}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.1${id} redis redis-server /etc/redis/redis.conf; 
done

Build the cluster

docker exec -it redis-1 \
redis-cli --cluster create \
172.38.0.11:6379 \
172.38.0.12:6379 \
172.38.0.13:6379 \
172.38.0.14:6379 \
172.38.0.15:6379 \
172.38.0.16:6379 \
--cluster-replicas 1
[root@VM-8-9-centos data]# docker exec -it redis-1 redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: 1ac333a27b8d3aecdcd3b14d90428869fde4ed98 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
M: fe99d242067af86f124bcc691a682b25345e209c 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: 8cf43330c5eac644e5b94c608d26902097760dc1 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: 174b8a5fddbf7741aa517af5ae465960ee9fb7c9 172.38.0.14:6379
   replicates 8cf43330c5eac644e5b94c608d26902097760dc1
S: 0015c60ff195dfd1a723a96ea327cec6f7fc0413 172.38.0.15:6379
   replicates 1ac333a27b8d3aecdcd3b14d90428869fde4ed98
S: 92f2bdb68cc3ff268397e4af37ece72be11009df 172.38.0.16:6379
   replicates fe99d242067af86f124bcc691a682b25345e209c
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: 1ac333a27b8d3aecdcd3b14d90428869fde4ed98 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 0015c60ff195dfd1a723a96ea327cec6f7fc0413 172.38.0.15:6379
   slots: (0 slots) slave
   replicates 1ac333a27b8d3aecdcd3b14d90428869fde4ed98
S: 174b8a5fddbf7741aa517af5ae465960ee9fb7c9 172.38.0.14:6379
   slots: (0 slots) slave
   replicates 8cf43330c5eac644e5b94c608d26902097760dc1
M: 8cf43330c5eac644e5b94c608d26902097760dc1 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 92f2bdb68cc3ff268397e4af37ece72be11009df 172.38.0.16:6379
   slots: (0 slots) slave
   replicates fe99d242067af86f124bcc691a682b25345e209c
M: fe99d242067af86f124bcc691a682b25345e209c 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
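The three slot ranges printed by redis-cli come from splitting the 16384 hash slots across the 3 masters with a fractional cursor. The sketch below reproduces the exact ranges shown above; it mirrors the arithmetic but is not redis-cli's actual source:

```python
SLOTS = 16384
masters = 3
per_node = SLOTS / masters  # 5461.33... slots per master

ranges = []
cursor = 0.0
for _ in range(masters):
    first = round(cursor)
    last = round(cursor + per_node - 1)
    ranges.append((first, last))
    cursor += per_node

print(ranges)  # -> [(0, 5460), (5461, 10922), (10923, 16383)]
```

Note that the extra slot from the uneven division lands on the middle master, which is why Master[1] holds 5462 slots while the other two hold 5461.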

Check cluster info: cluster info

root@7593134c0dcb:/data# redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:279
cluster_stats_messages_pong_sent:287
cluster_stats_messages_sent:566
cluster_stats_messages_ping_received:282
cluster_stats_messages_pong_received:279
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:566

Run one operation, setting a key (the cluster decides which node does the work)

127.0.0.1:6379> set a b
-> Redirected to slot [15495] located at 172.38.0.13:6379
OK
# issued against redis-1, but the cluster routed the work to redis-3
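The slot number 15495 is not random: Redis Cluster picks a slot as CRC16(key) mod 16384, using the CRC16-CCITT (XModem) variant. A minimal reimplementation reproduces the redirection above (hash-tag handling of `{...}` in keys is omitted for brevity):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem): polynomial 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Hash slot as defined by the Redis Cluster spec (hash tags not handled)
    return crc16(key.encode()) % 16384

print(key_slot("a"))  # -> 15495, matching "Redirected to slot [15495]" above
```

Slot 15495 falls in the range 10923-16383, which the cluster assigned to the master at 172.38.0.13 (redis-3); that is exactly why the command was redirected there.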

Stop redis-3, then try to read the key we just set from redis-1

[root@VM-8-9-centos ~]# docker stop redis-3

The value is still retrievable; it now comes from redis-4 (redis-3's replica):

127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 172.38.0.14:6379
"b"