I. The traditional bonding approach
(1) Overview of the main bonding modes
• mode 0 (balance-rr): load balancing (round-robin). Requires matching link-aggregation configuration on the switch side; provides load balancing across multiple ports as well as port redundancy; all slave interfaces share the same MAC address.
• mode 1 (active-backup): one slave is active and the rest stand by, so it is typically deployed with two ports; only one NIC carries traffic at any time, and by default a recovered slave does not preempt the currently active one.
• mode 4 (802.3ad): dynamic link aggregation negotiated via IEEE 802.3ad (LACP); the switch must have LACP enabled and be configured in active mode.
• mode 5 and mode 6 (balance-tlb / balance-alb): adaptive load-balancing modes that, like mode 1, need no special switch support; they are less commonly used.
The mode a running bond is actually using can be read back from the kernel, as shown below.
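For example, assuming the bond interface is named bond0 (as in the configuration that follows), the active mode can be checked with either of these:
cat /sys/class/net/bond0/bonding/mode        # prints the mode name and number, e.g. "balance-rr 0"
grep "Bonding Mode" /proc/net/bonding/bond0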
(2) Configuring bonding
• Stop and disable the NetworkManager service:
[root@localhost ~]# systemctl stop NetworkManager
[root@localhost ~]# systemctl disable NetworkManager
• Check whether the kernel has loaded the bonding module:
[root@localhost ~]# lsmod|grep bonding
bonding 141566 0
If the bonding module is not loaded, it can be loaded with:
modprobe --first-time bonding
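The parameters the bonding module accepts (mode, miimon, and so on) can be listed with modinfo before writing the driver options in the next step, for example:
[root@localhost ~]# modinfo bonding | grep parm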
• Configure the bonding driver (create the file manually if it does not exist):
[root@localhost ~]# vi /etc/modprobe.d/bond.conf
[root@localhost ~]# cat /etc/modprobe.d/bond.conf
alias bond0 bonding
options bond0 miimon=100 mode=0 //miimon enables MII link monitoring; the value is the polling interval in milliseconds
• Configure the bond interface:
cat /etc/sysconfig/network-scripts/ifcfg-bond0
TYPE=Bond
BOOTPROTO=none
ONBOOT=yes
USERCTL=no //whether non-root users are allowed to control this device
DEVICE=bond0
IPADDR=192.168.0.221
PREFIX=24
NM_CONTROLLED=no //whether NetworkManager manages this interface; set to no so the network service keeps control of it
BONDING_MASTER=yes
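On RHEL/CentOS 7 the same driver options can alternatively live in the bond's ifcfg file via BONDING_OPTS instead of /etc/modprobe.d; a minimal sketch of that variant (one extra line in ifcfg-bond0):
BONDING_OPTS="mode=0 miimon=100" //equivalent to the options line in /etc/modprobe.d/bond.conf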
• Configure the slave interfaces:
cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
NAME=eth0
DEVICE=eth0
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
The other slave NICs are configured in the same way; see the sketch below.
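For instance, assuming eth1 is the second slave (it appears in the bond status output below), its file differs only in NAME and DEVICE; a sketch:
cat /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
NAME=eth1
DEVICE=eth1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no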
• Restart the network service and verify:
[root@localhost network-scripts]# systemctl restart network
[root@localhost network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:ff:31:80
Slave queue ID: 0
Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:ff:31:8a
Slave queue ID: 0
[root@localhost network-scripts]#
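Redundancy can be spot-checked by taking one slave down and watching the driver's view of it (assuming eth0 is a slave, as above):
[root@localhost ~]# ip link set eth0 down
[root@localhost ~]# grep -A1 "Slave Interface: eth0" /proc/net/bonding/bond0   # its MII Status should now read "down"
[root@localhost ~]# ip link set eth0 up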
II. The nmcli approach with the NetworkManager service
(1) Check the network device status:
[root@localhost ~]# nmcli dev
DEVICE TYPE STATE CONNECTION
eth0 ethernet connected eth0
eth1 ethernet connected Wired connection 1
lo loopback unmanaged --
[root@localhost ~]#
(2) Check the network connection status:
[root@localhost ~]# nmcli con sh
NAME UUID TYPE DEVICE
Wired connection 1 d75d7715-1098-353e-bb11-4b718e51ff38 802-3-ethernet eth1
eth0 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 802-3-ethernet eth0
[root@localhost ~]#
(3) Create team0 (the team interface, i.e. the equivalent of a bond interface)
Create a connection for the team interface with nmcli, using the following syntax:
# nmcli con add type team con-name CNAME ifname INAME [config JSON]
CNAME is the connection name, INAME the interface name, and JSON (JavaScript Object Notation) specifies the runner to use. The JSON has the following form:
'{"runner":{"name":"METHOD"}}'
where METHOD is one of: broadcast, activebackup, roundrobin, loadbalance, or lacp.
The following uses the "roundrobin" runner as an example:
[root@localhost ~]# nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name":"roundrobin"}}'
Connection 'team0' (64021ca5-85c3-429d-b930-56802dc0ccc4) successfully added.
[root@localhost ~]#
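Any of the other runners listed above is created the same way; only the JSON changes. Sketches with hypothetical connection/interface names team1 and team2:
# nmcli con add type team con-name team1 ifname team1 config '{"runner":{"name":"activebackup"}}'
# nmcli con add type team con-name team2 ifname team2 config '{"runner":{"name":"lacp"}}'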
Set the IP address, gateway, and DNS for team0:
[root@localhost ~]# nmcli con modify team0 ipv4.address "192.168.0.222/16" ipv4.gateway "192.168.0.1"
[root@localhost ~]# nmcli con modify team0 ipv4.dns "223.5.5.5"
[root@localhost ~]#
Set team0's IPv4 addressing method to manual:
[root@localhost ~]# nmcli con modify team0 ipv4.method manual
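Before activating the connection, the stored IPv4 settings can be double-checked, for example:
[root@localhost ~]# nmcli con show team0 | grep ^ipv4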
Add the slave NICs:
[root@localhost ~]# nmcli con add type team-slave con-name team-port2 ifname eth1 master team0
Connection 'team-port2' (df74a4c7-f8ff-4ae3-b04f-3dd1210598cd) successfully added.
[root@localhost ~]# nmcli con add type team-slave con-name team-port1 ifname eth0 master team0
Connection 'team-port1' (757648c4-114f-439f-b022-5bcf63ae0cb3) successfully added.
[root@localhost ~]#
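The port connections also need to be active for the team to come up; if team0 stays in "master waiting for slaves" after the next step, activating the ports explicitly usually resolves it:
[root@localhost ~]# nmcli con up team-port1
[root@localhost ~]# nmcli con up team-port2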
Bring up the team0 interface and check it:
[root@localhost ~]# nmcli con up team0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
[root@localhost ~]# teamdctl team0 sta
setup:
runner: roundrobin
ports:
eth0
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
down count: 0
eth1
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
down count: 0
[root@localhost ~]#
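As with the bond earlier, failover can be sanity-checked by downing one port and re-running the state command (interface names as above):
[root@localhost ~]# ip link set eth0 down
[root@localhost ~]# teamdctl team0 state    # eth0 should now show "link: down" while traffic continues over eth1
[root@localhost ~]# ip link set eth0 up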
Common problem:
After activating team0, the interface is still down:
[root@localhost network-scripts]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:4f:fd:82 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.222/16 brd 192.168.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe4f:fd82/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:4f:fd:8c brd ff:ff:ff:ff:ff:ff
inet 192.168.0.160/16 brd 192.168.255.255 scope global dynamic eth1
valid_lft 7191sec preferred_lft 7191sec
inet 192.168.0.159/16 brd 192.168.255.255 scope global secondary dynamic eth1
valid_lft 6300sec preferred_lft 6300sec
inet6 fe80::86d1:12d7:5a7c:2d88/64 scope link
valid_lft forever preferred_lft forever
4: team0: mtu 1500 qdisc noqueue state DOWN
Troubleshooting:
1. Check the connection status; team-port1, team-port2, and team0 are not attached to any network device:
[root@localhost network-scripts]# nmcli con sh
NAME UUID TYPE DEVICE
eth0 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 802-3-ethernet eth0
eth1 22a287d8-6206-4d10-bdd9-5299b063300e 802-3-ethernet eth1
team-port1 757648c4-114f-439f-b022-5bcf63ae0cb3 802-3-ethernet --
team-port2 df74a4c7-f8ff-4ae3-b04f-3dd1210598cd 802-3-ethernet --
team0 64021ca5-85c3-429d-b930-56802dc0ccc4 team --
2. Delete the eth0 and eth1 connections that are holding the devices:
[root@localhost network-scripts]# nmcli con del eth0 eth1
3. Check again; team0 and its slave connections are now attached to the devices:
[root@localhost ~]# nmcli con sh
NAME UUID TYPE DEVICE
team-port1 757648c4-114f-439f-b022-5bcf63ae0cb3 802-3-ethernet eth0
team-port2 df74a4c7-f8ff-4ae3-b04f-3dd1210598cd 802-3-ethernet eth1
team0 64021ca5-85c3-429d-b930-56802dc0ccc4 team team0
4. Check the team0 interface status and test connectivity:
[root@localhost ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast master team0 state UP qlen 1000
link/ether 00:0c:29:4f:fd:82 brd ff:ff:ff:ff:ff:ff
3: eth1: mtu 1500 qdisc pfifo_fast master team0 state UP qlen 1000
link/ether 00:0c:29:4f:fd:82 brd ff:ff:ff:ff:ff:ff
4: team0: mtu 1500 qdisc noqueue state UP qlen 1000
link/ether 00:0c:29:4f:fd:82 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.222/16 brd 192.168.255.255 scope global team0
valid_lft forever preferred_lft forever
inet6 fe80::ac47:e724:cd16:c5ca/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::acce:9394:eafe:57bb/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::e1a2:77fd:6148:c7c6/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
[root@localhost ~]# ping baidu.com
PING baidu.com (111.13.101.208) 56(84) bytes of data.
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=1 ttl=52 time=30.6 ms
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=1 ttl=52 time=30.7 ms (DUP!)
^C
--- baidu.com ping statistics ---
1 packets transmitted, 1 received, +1 duplicates, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 30.684/30.696/30.708/0.012 ms
[root@localhost ~]#
Note: duplicate replies (DUP!) like the following during testing are caused by the switch side not being configured for link aggregation:
[root@localhost ~]# ping baidu.com
PING baidu.com (111.13.101.208) 56(84) bytes of data.
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=1 ttl=52 time=28.2 ms
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=1 ttl=52 time=28.2 ms (DUP!)
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=2 ttl=52 time=29.2 ms
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=2 ttl=52 time=29.2 ms (DUP!)
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=3 ttl=52 time=29.8 ms
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=3 ttl=52 time=29.9 ms (DUP!)
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=4 ttl=52 time=27.7 ms
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=4 ttl=52 time=27.7 ms (DUP!)
Source (original Chinese post): http://youdong.blog.51cto.com/3562886/1963416