1 Test Environment
Operating system: CentOS Linux 5 (kernel 2.6.18-53.el5)
NICs: two network cards (different brands and models), named eth0 and eth1
Network: 192.168.1.201/24
2 Configuration
2.1 Related Files
/etc/sysconfig/network-scripts/ifcfg-bond0
/etc/sysconfig/network-scripts/ifcfg-eth0
/etc/sysconfig/network-scripts/ifcfg-eth1
/etc/modprobe.conf
/etc/rc.local
/proc/net/bonding/bond0
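Before editing anything, you can confirm that the kernel ships the bonding driver (a quick sanity check; if the command prints module information, the driver is available):
# modinfo bonding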
2.2 Configuration Steps
1. Back up the eth0 and eth1 configuration files
# cd /etc/sysconfig/network-scripts
# cp ifcfg-eth0 bak.ifcfg-eth0
# cp ifcfg-eth1 bak.ifcfg-eth1
2. Create ifcfg-bond0
# vi ifcfg-bond0
Add the following content:
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BROADCAST=192.168.1.255
IPADDR=192.168.1.201
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
3. Edit ifcfg-eth0
# vi ifcfg-eth0
Replace the existing content with the following:
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
HWADDR=00:16:EC:AE:07:54
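The HWADDR values here are specific to this machine. If you are unsure of a card's permanent MAC address, you can read it before editing, for example:
# ifconfig eth0 | grep HWaddr
# ifconfig eth1 | grep HWaddr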
4. Edit ifcfg-eth1
# vi ifcfg-eth1
Replace the existing content with the following:
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
HWADDR=00:50:BA:0C:65:02
5. Edit modprobe.conf
# cd /etc
# vi modprobe.conf
Append the following content:
alias bond0 bonding
options bond0 miimon=100 mode=1
Notes:
miimon sets the link-monitoring interval in milliseconds; mode selects the bonding mode. There are seven modes, numbered 0 through 6:
Mode 0: load balancing, round-robin; full load balancing; requires trunking on the switch side; tolerates the failure of one NIC.
Mode 1: active-backup (hot standby); requires no switch-side support.
Mode 2: load balancing, XOR; selects the slave by hashing the source and destination MAC addresses.
Mode 3: broadcast; all NICs transmit every packet; tolerates the failure of one NIC.
Mode 4: 802.3ad; requires a switch that supports 802.3ad Dynamic Link Aggregation (LACP).
Mode 5: load balancing, TLB; half load balancing: transmit is balanced across the slaves while receive is handled by one dynamically designated slave; requires no switch-side support; tolerates the failure of one NIC.
Mode 6: load balancing, ALB; full load balancing; the NICs must support changing their MAC address on the fly; requires no switch-side support; tolerates the failure of one NIC.
Modes 0 and 1 are the most commonly used.
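To verify these options before a reboot, you can load the driver by hand with the same parameters and check that it registered:
# modprobe bonding miimon=100 mode=1
# lsmod | grep bonding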
6. Edit rc.local
# vi rc.local
Append the following content:
ifenslave bond0 eth0 eth1
route add -net 192.168.1.0 netmask 255.255.255.0 bond0
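If you would rather not reboot right away, the new configuration can usually be applied immediately by restarting the network service and enslaving the NICs by hand:
# service network restart
# ifenslave bond0 eth0 eth1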
3 Testing
Reboot the server. If messages like the following appear while services are brought up at boot, the module loaded successfully.
Bringing up interface bond0 [ OK ]
Bringing up interface eth0 [ OK ]
Bringing up interface eth1 [ OK ]
3.1 Checking the IP Configuration
The output should look similar to the following. Note that bond0 and both slaves report the same MAC address (that of the first slave, eth0); this is normal in active-backup mode.
# ifconfig -a
bond0 Link encap:Ethernet HWaddr 00:16:EC:AE:07:54
inet addr:192.168.1.201 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::216:ecff:feae:754/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:10452933 errors:0 dropped:0 overruns:0 frame:0
TX packets:2245803 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2313966546 (2.1 GiB) TX bytes:1240415537 (1.1 GiB)
eth0 Link encap:Ethernet HWaddr 00:16:EC:AE:07:54
inet6 addr: fe80::216:ecff:feae:754/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:10353385 errors:0 dropped:0 overruns:0 frame:0
TX packets:2245519 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2305237371 (2.1 GiB) TX bytes:1240369619 (1.1 GiB)
Interrupt:225 Base address:0xc800
eth1 Link encap:Ethernet HWaddr 00:16:EC:AE:07:54
inet6 addr: fe80::216:ecff:feae:754/64 Scope:Link
UP BROADCAST SLAVE MULTICAST MTU:1500 Metric:1
RX packets:99554 errors:0 dropped:0 overruns:0 frame:0
TX packets:291 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8729599 (8.3 MiB) TX bytes:47044 (45.9 KiB)
Interrupt:217 Base address:0xac00
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:12133 errors:0 dropped:0 overruns:0 frame:0
TX packets:12133 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:18922380 (18.0 MiB) TX bytes:18922380 (18.0 MiB)
3.2 Gateway Connectivity
# ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=255 time=0.608 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=255 time=0.593 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=255 time=0.800 ms
64 bytes from 192.168.1.1: icmp_seq=4 ttl=255 time=0.515 ms
--- 192.168.1.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.515/0.629/0.800/0.104 ms
3.3 Checking bond0 Status
(In this particular capture eth1's link happened to be down; with both cables connected, both slaves should report MII Status: up.)
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.1.2 (January 20, 2007)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Link Failure Count: 1
Permanent HW addr: 00:16:ec:ae:07:54
Slave Interface: eth1
MII Status: down
Link Failure Count: 2
Permanent HW addr: 00:50:ba:0c:65:02
3.4 Failover Test
From another machine on the same subnet, ping the bonded host continuously, then unplug the eth0 cable while services on the host are still being accessed over the network.
# ping 192.168.1.201
Pinging 192.168.1.201 with 32 bytes of data:
Reply from 192.168.1.201: bytes=32 time<1ms TTL=64
Reply from 192.168.1.201: bytes=32 time<1ms TTL=64
Reply from 192.168.1.201: bytes=32 time<1ms TTL=64
Reply from 192.168.1.201: bytes=32 time<1ms TTL=64
Reply from 192.168.1.201: bytes=32 time=3ms TTL=64
Reply from 192.168.1.201: bytes=32 time<1ms TTL=64
Reply from 192.168.1.201: bytes=32 time<1ms TTL=64
Reply from 192.168.1.201: bytes=32 time<1ms TTL=64
Reply from 192.168.1.201: bytes=32 time<1ms TTL=64
Ping statistics for 192.168.1.201:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
While the ping is running, open an SSH session to the host and run top: the session only pauses for a moment and then recovers without disconnecting. Checking the bond0 status shows that the Currently Active Slave has switched to eth1.
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.1.2 (January 20, 2007)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: down
Link Failure Count: 1
Permanent HW addr: 00:16:ec:ae:07:54
Slave Interface: eth1
MII Status: up
Link Failure Count: 2
Permanent HW addr: 00:50:ba:0c:65:02
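To watch the failover happen in real time, you can keep the bond status on screen while pulling cables (using the standard watch utility):
# watch -n 1 cat /proc/net/bonding/bond0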
After the eth0 cable is plugged back in, bonding does not switch the active interface back to eth0 right away; it keeps using eth1 until that link fails.
Unplugging the eth1 cable and checking the bond0 status again shows that the Currently Active Slave has switched back to eth0.
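This stick-with-the-current-slave behavior is the driver's default. If you want eth0 to be preferred whenever its link is healthy, the bonding driver accepts a primary option; a sketch of the adjusted options line in modprobe.conf, assuming the configuration above:
options bond0 miimon=100 mode=1 primary=eth0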
With that, the failover tests all pass: the bonding configuration works as intended.