Cluster setup with Aggregates, VLANs, and IPMP

Configuring clusters is complicated.  It is even more complicated when you throw in the complex network configurations demanded by some customers.  This particular config has pretty much everything in it: aggregation, VLANs, and IPMP.

    We've got two quad-gig Intel cards, which puts e1000g0-e1000g3 on one card and e1000g4-e1000g7 on the other.  An aggregation isn't declared dead until all of its links are down, so to protect against a complete card failure as well as a switch failure, we'll have each card aggregate its ports to a single switch rather than aggregating across cards.  So we'll create aggr0 on one card and aggr1 on the other.
    On top of the aggregates, we need to create VLAN interfaces.  We'll use VLAN ID 2601 in this example.  Solaris encodes the VLAN ID into the interface name (VLAN ID * 1000 plus the instance number of the underlying link), so this will create aggr2601000, which rides on aggr0, and aggr2601001, which rides on aggr1.
    We then use IPMP between aggr2601000 and aggr2601001.  We will configure link-based IPMP, which only switches interfaces if the link goes down.  If we wanted probe-based failure detection, in which the system actively pings the local router to determine network health, we would put test addresses on aggr2601000 and aggr2601001.
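
    For reference, this is roughly the layering NMS will build for us, expressed as raw dladm and ifconfig commands.  Treat it as a sketch only: the dladm syntax below assumes an OpenSolaris-style build where aggregations are created by link name; older releases use -d <device> arguments with a numeric aggregation key, and the LACP-mode flag differs accordingly.

# Two ports per card, each aggregate homed to a single switch; pick the LACP
# mode (off, active, or passive) to match the customer's switch configuration
dladm create-aggr -L passive -l e1000g0 -l e1000g1 aggr0
dladm create-aggr -L passive -l e1000g4 -l e1000g5 aggr1

# Plumb the VLAN 2601 interfaces using the PPA encoding (VLAN ID * 1000 + instance)
ifconfig aggr2601000 plumb
ifconfig aggr2601001 plumb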

    From NMS:

setup network aggregation create
    - Select e1000g0 & e1000g1
    - Select "off", "passive", or "active" depending on what the customer's network requires
    - Call it aggr0
setup network aggregation create
    - Select e1000g4 & e1000g5
    - Select the same LACP mode as aggr0
    - Call it aggr1
setup network interface vlan aggr0
    - VLAN ID 2601
setup network interface vlan aggr1
    - VLAN ID 2601
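
    Before going further, it's worth sanity-checking what NMS just created.  From a shell on the appliance, the standard dladm and ifconfig tools should show both aggregates and the VLAN interfaces riding on them:

dladm show-aggr    # aggr0 and aggr1 with the expected ports and LACP mode
ifconfig -a        # aggr2601000 and aggr2601001 should be plumbed on top of them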

    From there we need to drop to bash and edit the /etc/hostname.* files.
    For IPMP, we should configure this according to Clearview standards.  Clearview information can be found at http://hub.opensolaris.org/bin/view/Project+clearview/ipmp

    First, we configure the underlying interfaces for our group.  We're going to call this the data0 group.  We'll give the underlying interfaces "test" addresses, which are used to probe network reachability; typically these will be used to ping the local routers.


/etc/hostname.aggr2601000:
    group data0 192.168.0.111 -failover

/etc/hostname.aggr2601001:
    group data0 192.168.0.112 -failover

    Second, we create the IPMP interface.  We're going to give this a vanity name that's descriptive of what it does; in our case, datanet0.

/etc/hostname.datanet0:
    ipmp group data0 192.168.10.110 netmask 255.255.255.0 broadcast + up
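
    The hostname.* files only take effect at boot.  To bring the same configuration up immediately, the equivalent ifconfig commands look roughly like this (a sketch that mirrors the files above; on some builds the IPMP interface has to be created first with a bare "ifconfig datanet0 ipmp" before the address is assigned):

ifconfig aggr2601000 group data0 192.168.0.111 -failover up
ifconfig aggr2601001 group data0 192.168.0.112 -failover up
ifconfig datanet0 ipmp group data0 192.168.10.110 netmask 255.255.255.0 broadcast + up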

    NMS can't tell RSF-1 to use datanet0 as the interface when we configure a shared volume (and it doesn't handle IPMP very well in the first place).  So just configure the volume against one of the underlying interfaces, and then we'll manually edit the config file.

setup group rsf-cluster <clustername> shared-volume add

    Now edit /opt/HAC/RSF-1/etc/config and, under the SERVICE section for each shared volume, change the IPDEVICE line from:

IPDEVICE "aggr2601000"

to

IPDEVICE "datanet0"

    Distribute that file to the other node.
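
    For the distribution step, plain scp is enough; the peer name below (node2) is just a placeholder for whatever the second cluster node is actually called:

scp /opt/HAC/RSF-1/etc/config node2:/opt/HAC/RSF-1/etc/config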
    Then restart the cluster services or simply reboot both nodes.
    Test failover between them and we should be good to go.
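
    Before (and after) exercising the RSF-1 failover itself, the Clearview tools give a quick read on the IPMP side on each node, and if_mpadm can force one interface offline to prove traffic stays up on datanet0 (again a sketch, assuming these utilities are present on the installed release):

ipmpstat -g              # group data0 should be in state "ok" with both interfaces
ipmpstat -i              # per-interface state within the group
if_mpadm -d aggr2601000  # detach one side; datanet0 should stay reachable
if_mpadm -r aggr2601000  # reattach it when done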
