How to fix: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24
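The events below were pulled by describing the failing pod, presumably with something like:

```bash
kubectl describe pod ds-d58vg -n kube-system
```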

  Warning  FailedCreatePodSandBox  3m18s                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "1506a90c486e2c187e21e8fb4b6888e5d331235f48eebb5cf44121cc587a6f05" network for pod "ds-d58vg": networkPlugin cni failed to set up pod "ds-d58vg_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24
  Normal   SandboxChanged          3m1s (x12 over 4m13s)  kubelet            Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  2m59s (x4 over 3m14s)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a8dc84257ca6f4543c223735dd44e79c1d001724a54cd20ab33e3a7596fba5c9" network for pod "ds-d58vg": networkPlugin cni failed to set up pod "ds-d58vg_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24

When starting the pod, it kept reporting the error above. Checking the network interfaces on the node:

# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 10.244.0.255
        inet6 fe80::80bc:10ff:feb0:9d1b  prefixlen 64  scopeid 0x20<link>
        ether 82:bc:10:b0:9d:1b  txqueuelen 1000  (Ethernet)
        RX packets 1478990  bytes 119510314 (113.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1486862  bytes 136242849 (129.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

...

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::605e:12ff:feb8:7ce3  prefixlen 64  scopeid 0x20<link>
        ether 62:5e:12:b8:7c:e3  txqueuelen 0  (Ethernet)
        RX packets 55074  bytes 9896264 (9.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 57738  bytes 5642813 (5.3 MiB)
        TX errors 0  dropped 10 overruns 0  carrier 0  collisions 0

 

# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
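Before deleting anything, it helps to confirm the mismatch explicitly: the address on cni0 and the subnet flannel allocated for the node must agree. A minimal check, assuming iproute2 and the flannel file above are present:

```bash
# Print the bridge address and flannel's allocated subnet side by side;
# the bridge must carry the .1 address of the FLANNEL_SUBNET /24.
ip -4 addr show cni0 | awk '/inet /{print $2}'   # e.g. 10.244.0.1/24
grep FLANNEL_SUBNET /run/flannel/subnet.env      # e.g. FLANNEL_SUBNET=10.244.0.1/24
```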

If you simply bring the cni0 bridge down and delete it:

# ifconfig cni0 down
# ip link delete cni0
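On hosts without the legacy net-tools package, the iproute2 equivalents do the same job:

```bash
# Same effect as the ifconfig/ip pair above, using iproute2 only
sudo ip link set cni0 down
sudo ip link delete cni0
```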

After doing this, the error does go away and the pod runs normally, but it knocks the DNS pods over:

# kubectl get po -o wide -n kube-system
NAME                                        READY   STATUS             RESTARTS         AGE   IP             NODE                NOMINATED NODE   READINESS GATES
coredns-6d8c4cb4d-7lswb                     0/1     CrashLoopBackOff   9 (116s ago)     22h   10.244.0.3     master              <none>           <none>
coredns-6d8c4cb4d-84z48                     0/1     CrashLoopBackOff   9 (2m6s ago)     22h   10.244.0.2     master              <none>           <none>
ds-4cqxm                                    1/1     Running            0                33m   10.244.0.4     master              <none>           <none>
ds-d58vg                                    1/1     Running            0                33m   10.244.2.185   node2               <none>           <none>
ds-sjxwn                                    1/1     Running            0                33m   10.244.1.48    node1               <none>           <none>

Inspecting the coredns pod at this point:

# kubectl describe po coredns-6d8c4cb4d-84z48 -n kube-system
Name:                 coredns-6d8c4cb4d-84z48
Namespace:            kube-system
Priority:             2000000000
......

Events:
  Type     Reason     Age                    From     Message
  ----     ------     ----                   ----     -------
  Warning  Unhealthy  28m (x5 over 29m)      kubelet  Liveness probe failed: Get "http://10.244.0.2:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Normal   Killing    28m                    kubelet  Container coredns failed liveness probe, will be restarted
  Normal   Pulled     28m (x2 over 22h)      kubelet  Container image "registry.aliyuncs.com/google_containers/coredns:v1.8.6" already present on machine
  Normal   Created    28m (x2 over 22h)      kubelet  Created container coredns
  Normal   Started    28m (x2 over 22h)      kubelet  Started container coredns
  Warning  BackOff    9m29s (x27 over 16m)   kubelet  Back-off restarting failed container
  Warning  Unhealthy  4m32s (x141 over 29m)  kubelet  Readiness probe failed: Get "http://10.244.0.2:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
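The probes target the pod IP directly, so the same timeout can be reproduced from the node's shell (10.244.0.2 is the pod IP from the events above; the 2-second cap is an assumption mirroring a short probe timeout):

```bash
# Hit the coredns health and readiness endpoints the way kubelet does;
# -m 2 caps each request at 2 seconds so a hang fails fast
curl -m 2 http://10.244.0.2:8080/health
curl -m 2 http://10.244.0.2:8181/ready
```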

I had to look for another fix; deleting the earlier pods did not help, and the DNS pods remained unhealthy.

In the end, the problem was only solved by deleting the DNS pods and letting the Deployment pull them back up:

# kubectl delete pod coredns-6d8c4cb4d-7lswb -n kube-system
pod "coredns-6d8c4cb4d-7lswb" deleted
# kubectl delete pod coredns-6d8c4cb4d-84z48 -n kube-system
pod "coredns-6d8c4cb4d-84z48" deleted

# kubectl get pod -n kube-system -o wide
NAME                                        READY   STATUS    RESTARTS         AGE     IP             NODE                NOMINATED NODE   READINESS GATES
coredns-6d8c4cb4d-8xghq                     1/1     Running   0                3m48s   10.244.2.186   node2               <none>           <none>
coredns-6d8c4cb4d-q65vq                     1/1     Running   0                3m48s   10.244.1.49    node1               <none>           <none>
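To confirm cluster DNS is actually working again, a throwaway busybox pod can resolve a service name (a quick sketch; busybox:1.28 is used because its nslookup behaves well):

```bash
# Run a one-off pod, resolve the kubernetes service, then clean up (--rm)
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- \
  nslookup kubernetes.default
```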

For comparison: master, node1, and node2 had all run

# ifconfig cni0 down
# ip link delete cni0

but only on master was cni0 not recreated; node1 and node2 recreated it automatically, which is why coredns now runs on the worker nodes.

I have not yet dug into why cni0 failed to be recreated on master.
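A plausible way to nudge master into recreating the bridge would be to restart kubelet and the flannel pod pinned to that node; this is a sketch I have not verified on this cluster, and the flannel label varies by manifest version:

```bash
# Retry CNI setup for new sandboxes on this node
sudo systemctl restart kubelet
# Recreate the flannel pod running on master (label may be app=flannel
# or k8s-app=flannel depending on the kube-flannel manifest)
kubectl delete pod -n kube-system -l app=flannel \
  --field-selector spec.nodeName=master
```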

---

In a Kubernetes cluster, when a CNI network plugin (such as Flannel or Calico) tries to set up the pod network, you may run into this error:

```
failed to set bridge addr: "cni0" already has an IP address different from 10.244.1.1/24
```

It usually means the node already has a `cni0` bridge, and that bridge has been assigned an IP address that does not match the subnet the current CNI plugin is configured with (e.g. `10.244.0.0/16`).

### Root causes

1. **Stale network configuration**: a `cni0` bridge left behind by a previously running CNI plugin or container network prevents the new configuration from being applied.
2. **IP address conflict**: the CNI plugin expects to give `cni0` an address from a specific subnet (e.g. `10.244.1.1/24`), but the existing interface already holds an address from a different subnet.
3. **Stale Flannel subnet cache**: Flannel records the node's subnet in `subnet.env`; if this file is left over or inconsistent, IP allocation can conflict[^2].

### Fixes

1. **Delete and recreate the `cni0` bridge**

   Bring the current `cni0` interface down and delete it so that it is recreated with the correct address:

   ```bash
   sudo ifconfig cni0 down
   sudo ip link delete cni0
   ```

   After deletion, Flannel (or whichever CNI plugin is in use) recreates `cni0` and its IP address from the configuration on restart[^4].

2. **Clean up Flannel state (if using Flannel)**

   Delete Flannel's subnet cache file so the subnet gets reassigned:

   ```bash
   sudo rm /run/flannel/subnet.env
   ```

   Then restart Flannel, or the kubelet service on the node, to reinitialize the network configuration[^2].

3. **Clear IPVS rules (if kube-proxy runs in IPVS mode)**

   Leftover IPVS rules can interfere with network initialization:

   ```bash
   sudo ipvsadm --clear
   ```

   Then restart kube-proxy or the related pods:

   ```bash
   kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
   ```

4. **Reboot the node (if the problem persists)**

   If none of the above helps, kernel network namespaces or caches may not have been cleaned up properly; rebooting the node clears the network state completely[^2].

### Verification

Afterwards, check that `cni0` on the node is configured correctly:

```bash
ip addr show cni0
```

Confirm that its IP address matches the CNI plugin's subnet configuration (e.g. `10.244.x.x/24`).

Also check that the CNI plugin pods (e.g. Flannel) on the node are healthy:

```bash
kubectl get pods -n kube-system -l k8s-app=flannel
```

Make sure the pods are `Running`, with no restarts or error states.
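One more cross-check worth adding: the controller manager assigns each node its pod CIDR, and `cni0` on each node should carry the `.1` address of that CIDR. A sketch using jsonpath:

```bash
# List each node with the pod CIDR it was assigned
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
```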