Modifying the SCAN IP, VIP, and Public IP in 11g RAC

This article walks through changing the IP addresses in an Oracle RAC environment, covering the key steps: stopping the database and cluster resources, modifying the node IP configuration, and adjusting the cluster parameters.
Cluster status before the change:
[grid@rac1 ~]$ crs_stat -t
1. Stop the database, CRS, and related resources
[grid@rac1 ~]$ srvctl stop database -d orcl    // stop the database
[grid@rac1 ~]$ srvctl status database -d orcl  // check the database status; orcl is the database name
Instance orcl1 is not running on node rac1
Instance orcl2 is not running on node rac2
[grid@rac1 ~]$ srvctl disable listener         // disable listener autostart
[grid@rac1 ~]$ srvctl stop listener            // stop the listener
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/srvctl disable vip -i "rac1-vip"   // disable VIP autostart on each node, then stop the VIPs
[grid@rac1 ~]$ srvctl stop vip -n rac1
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/srvctl disable vip -i "rac2-vip"
[grid@rac1 ~]$ srvctl stop vip -n rac2

[grid@rac1 ~]$ srvctl disable scan_listener
[grid@rac1 ~]$ srvctl stop scan_listener
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/srvctl disable scan
[grid@rac1 ~]$ srvctl stop scan
[grid@rac2 ~]$ crs_stat -t    // check the cluster status
2. Stop CRS on both nodes
[root@rac1 bin]# ./crsctl stop crs    // note: only root can run this
[root@rac2 bin]# ./crsctl stop crs
Or:
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop crs
[root@rac2 ~]# /u01/app/11.2.0/grid/bin/crsctl stop crs

Make sure CRS is completely stopped on all nodes:
[root@rac1 bin]# ./crsctl check crs
CRS-4639: Could not contact Oracle High Availability Services
[root@rac1 bin]# ./crsctl check cluster
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Check failed, or completed with errors.
[root@rac1 bin]# ./crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.


[root@rac2 bin]# ./crsctl check crs
CRS-4639: Could not contact Oracle High Availability Services
[root@rac2 bin]# ./crsctl check cluster
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Check failed, or completed with errors.
[root@rac2 bin]# ./crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.
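The checks above can be repeated in a loop until CRS is actually down. As a sketch, a hypothetical poll helper (`wait_until_down` is my own name, not an Oracle tool) that retries a check command until it fails:

```shell
#!/bin/sh
# Hypothetical helper (not part of Oracle's tooling): retry a check command
# until it returns non-zero, meaning the service is down, or give up after
# a fixed number of tries. On a real node you would pass "crsctl check crs".
wait_until_down() {
    tries=$1; shift          # $1: retry budget, rest: the check command
    while [ "$tries" -gt 0 ]; do
        "$@" || return 0     # check failed => service is down => success
        tries=$((tries - 1))
        sleep 1
    done
    return 1                 # still up after all retries
}
```

On rac1 this could be invoked as `wait_until_down 30 /u01/app/11.2.0/grid/bin/crsctl check crs` right after `crsctl stop crs`.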
3. Modify /etc/hosts on all nodes (do not change the private entries yet)
1. vim /etc/hosts
A small aside here: a previous vi session on the hosts file had not exited cleanly, leaving a .hosts.swp file that blocked the file from being opened. Deleting the swap file fixed it ([root@rac1 etc]# rm -rf .hosts.swp).
Original IPs:
#public IP name
192.168.1.131 rac1
192.168.1.132 rac2
#private IP
192.168.26.131 rac1-priv
192.168.26.132 rac2-priv
#vip
192.168.1.211 rac1-vip
192.168.1.212 rac2-vip
#scan
192.168.1.99 scan

After the change:
#public IP name
192.168.1.221 rac1
192.168.1.222 rac2
#private IP
192.168.26.135 rac1-priv
192.168.26.136 rac2-priv
#vip
192.168.1.89 rac1-vip
192.168.1.90 rac2-vip
#scan
192.168.1.98 scan
2. Modify the NIC configuration on rac1 and rac2
Note: modify only eth0 (the public interface); do not touch the interconnect NIC.
Make sure the gateway and DNS settings are also correct.
rac1: vi /etc/sysconfig/network-scripts/ifcfg-eth0   192.168.1.221
rac2: vi /etc/sysconfig/network-scripts/ifcfg-eth0   192.168.1.222
Restart the network and confirm that each node's public address can be pinged from the other node via the hosts entries.
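Before restarting the network, it helps to double-check the edited file. A minimal sketch (the `check_hosts` name is mine, not a standard tool) that prints every host/IP pair from a hosts-format file so the new addresses can be verified at a glance:

```shell
#!/bin/sh
# Sketch: list host -> IP pairs from a hosts-format file, skipping comment
# and blank lines, so any stale address stands out immediately.
check_hosts() {
    awk '!/^[[:space:]]*#/ && NF >= 2 { print $2, "->", $1 }' "$1"
}
```

`check_hosts /etc/hosts` should list rac1 -> 192.168.1.221, rac1-vip -> 192.168.1.89, and so on, matching the "after" table above.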


4. Start CRS:
[root@rac1 ~]#/u01/app/11.2.0/grid/bin/crsctl start  crs
[root@rac2 ~]#/u01/app/11.2.0/grid/bin/crsctl start  crs
Or:
[root@rac1 bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@rac1 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
(at this point the services are still coming up; wait until everything reports online)
[root@rac2 bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@rac2 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4534: Cannot communicate with Event Manager

Cluster public IP setting (can be skipped if the new address is in the same subnet) ----### this step was not tested
After starting CRS, wait a moment before running the following.
[root@rac1 bin]# ./oifcfg getif      // check the cluster network interfaces
eth0  19.16.0.0 global  public
eth1  5.1.1.0 global  cluster_interconnect
[root@rac1 bin]# ./oifcfg delif -global eth0
[root@rac1 bin]# ./oifcfg setif -global eth0/19.16.5.0:public
[root@rac1/2 bin]# ./oifcfg getif    // check the interfaces on both nodes
This is only an example; in my case the subnet did not change, so I skipped this step.
If you need to modify the private IP, first run:
[root@rac1 bin]# ./oifcfg delif -global eth1
[root@rac1 bin]# ./oifcfg setif -global eth1/x.x.x.0:cluster_interconnect    (note: I did not change the private NIC earlier; the private interface must be changed here first, and only then can the NIC configuration be changed)
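Note that `oifcfg setif` takes the network (subnet) address, e.g. 19.16.5.0, not the node's own IP. When in doubt, the subnet can be computed from any host IP and its netmask; a small sketch (`subnet_of` is a hypothetical helper, pure shell arithmetic, no Oracle tools needed):

```shell
#!/bin/sh
# Sketch: compute the network address that oifcfg setif expects, as the
# octet-by-octet bitwise AND of a host IP and its netmask.
subnet_of() {
    oldIFS=$IFS; IFS=.
    set -- $1 $2             # split both dotted quads into $1..$8
    IFS=$oldIFS
    echo "$(($1 & $5)).$(($2 & $6)).$(($3 & $7)).$(($4 & $8))"
}
```

For example, `subnet_of 192.168.1.89 255.255.255.0` prints 192.168.1.0, the value that would go into `oifcfg setif -global eth0/192.168.1.0:public`.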



Dacheng's detailed steps:
Use oifcfg to modify the public address:
[grid@rac1 ~]$ oifcfg getif
eth0 192.168.137.0 global public
eth1 192.168.16.0 global cluster_interconnect
The global public subnet still shows the old value; change it to the new one:
[grid@rac1 ~]$ oifcfg setif -global eth0/192.168.115.0:public
[grid@rac1 ~]$ oifcfg getif
eth1 192.168.16.0 global cluster_interconnect
eth0 192.168.115.0 global public
Note: running this on one node is enough; the other nodes sync automatically.
[grid@rac2 ~]$ oifcfg getif
eth1 192.168.16.0 global cluster_interconnect
eth0 192.168.115.0 global public


Use srvctl to modify the VIPs
[grid@rac1 ~]$ srvctl config vip -n rac1
VIP exists: /rac1-vip/192.168.115.110/192.168.137.0/255.255.255.0/eth0, hosting node rac1
[grid@rac1 ~]$ srvctl config vip -n rac2
VIP exists: /rac2-vip/192.168.115.111/192.168.137.0/255.255.255.0/eth0, hosting node rac2
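The address embedded in the `VIP exists:` lines above can be pulled out programmatically when comparing old and new values; a sketch (the `vip_of` helper name is my own):

```shell
#!/bin/sh
# Sketch: extract the VIP address from a "srvctl config vip" output line,
# which has the form "VIP exists: /<name>/<ip>/<subnet>/<netmask>/<iface>, ..."
vip_of() {
    echo "$1" | awk -F/ '{ print $3 }'
}
```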
Note that the subnet here is still the old one and needs to be updated:
[grid@rac1 ~]$ srvctl stop vip -n rac1
[grid@rac1 ~]$ srvctl stop vip -n rac2



[root@rac1 bin]# ./srvctl modify nodeapps -n rac1 -A 192.168.115.110/255.255.255.0/eth0
[root@rac1 bin]# ./srvctl modify nodeapps -n rac2 -A 192.168.115.111/255.255.255.0/eth0


5. Cluster VIP settings
[root@rac1 bin]# ./srvctl config vip -n rac1    // check the cluster VIP
VIP exists: /rac1-vip/192.168.1.89/192.168.1.0/255.255.255.0/eth0, hosting node rac1
[root@rac1 bin]# ./srvctl modify nodeapps -A 192.168.1.89/255.255.255.0/eth0 -n rac1
[root@rac1 bin]# ./srvctl modify nodeapps -A 192.168.1.90/255.255.255.0/eth0 -n rac2
Modifying a VIP resource requires manually updating the VIP configuration in the OCR with srvctl. If clients connect to the database through the VIP, the client-side hosts file and any other VIP-related configuration must be updated as well.
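Since `srvctl modify nodeapps` must be run as root and rewrites the OCR, it can be worth generating the commands first and reviewing them before executing anything. A dry-run sketch (the `print_vip_commands` helper and its node:IP argument format are my own invention; the /24 netmask on eth0 matches this post):

```shell
#!/bin/sh
# Dry-run sketch: print, without executing, one "srvctl modify nodeapps"
# command per node:vip pair, assuming a /24 netmask on eth0.
print_vip_commands() {
    for pair in "$@"; do
        node=${pair%%:*}
        vip=${pair##*:}
        echo "srvctl modify nodeapps -n $node -A $vip/255.255.255.0/eth0"
    done
}
```

`print_vip_commands rac1:192.168.1.89 rac2:192.168.1.90` prints the same two commands used in this section.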
[root@rac1 bin]# ./srvctl modify scan -n 192.168.1.98    (note: passing the IP directly here changes the SCAN VIP name!)
[root@rac1 bin]# ./srvctl enable listener
[root@rac1 bin]# ./srvctl config scan
SCAN name: 192.168.1.98, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /192.168.1.98/192.168.1.98
[root@rac1 bin]# ./srvctl modify scan -n scan    (change it back to the SCAN name)
[root@rac1 bin]# ./srvctl config scan
SCAN name: scan, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /scan/192.168.1.98

If you need to modify the private IP   ------# this step was not tested
(1) rac1: vi /etc/sysconfig/network-scripts/ifcfg-eth1
      change the IP to 15.1.1.1, netmask 255.255.255.0
      service network restart    // restart the network
(2) rac2: vi /etc/sysconfig/network-scripts/ifcfg-eth1
      change the IP to 15.1.1.2, netmask 255.255.255.0
      service network restart    // restart the network

Dacheng's detailed steps:
Use oifcfg to modify the private interface
[root@rac1 bin]# ./oifcfg getif
eth1 192.168.16.0 global cluster_interconnect
eth0 192.168.115.0 global public
[root@rac1 bin]# ./olsnodes -s
rac1 Active
rac2 Active
[root@rac1 bin]# ./oifcfg setif -global eth1/192.168.145.0:cluster_interconnect
[root@rac1 bin]# ./oifcfg getif
eth1 192.168.16.0 global cluster_interconnect
eth0 192.168.115.0 global public
eth1 192.168.145.0 global cluster_interconnect
[root@rac1 bin]# ./oifcfg delif -global eth1/192.168.16.0:cluster_interconnect
[root@rac1 bin]# ./oifcfg getif
eth0 192.168.115.0 global public
eth1 192.168.145.0 global cluster_interconnect

[root@rac2 bin]# ./oifcfg getif
eth0 192.168.115.0 global public
eth1 192.168.145.0 global cluster_interconnect
Verify that every node shows the new configuration.

Stop CRS
[root@rac1 bin]# ./crsctl stop crs
Edit /etc/hosts
Configure the NICs, then restart the network
After configuring, ping the hostnames again to confirm the hosts entries have taken effect
Once everything checks out, restart CRS:    [root@rac1 bin]# ./crsctl start crs
Start the VIPs, listener, SCAN, SCAN listener, and the database
[root@rac1 bin]#./srvctl enable listener      
[root@rac1 bin]#./srvctl enable vip -i "rac1-vip"
[root@rac1 bin]#./srvctl enable vip -i "rac2-vip"
[root@rac1 bin]#./srvctl enable scan_listener
[root@rac1 bin]#./srvctl enable scan
[root@rac1 bin]#./srvctl enable database -d hyw
[root@rac1 bin]#./srvctl start listener      
[root@rac1 bin]#./srvctl start vip -n rac1
[root@rac1 bin]#./srvctl start vip -n rac2
[root@rac1 bin]#./srvctl start scan_listener
[root@rac1 bin]#./srvctl start scan
[root@rac1 bin]#./srvctl start database -d hyw
Or:
[root@rac1 bin]# ./srvctl start listener
[root@rac1 bin]# ./srvctl start vip -n rac1
PRKO-2420 : VIP is already started on node(s): rac1
[root@rac1 bin]# ./srvctl start vip -n rac2
PRKO-2420 : VIP is already started on node(s): rac2
[root@rac1 bin]# ./srvctl start database -d orcl
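The enable/start sequence above follows a fixed dependency order (listener, VIPs, SCAN listener, SCAN, database). As a sketch, it can be echoed as a dry run before being executed for real; nothing below touches the cluster, and the resource names are the ones used in this post:

```shell
#!/bin/sh
# Dry-run sketch: print the srvctl start sequence in dependency order so it
# can be reviewed first. "orcl" is the database name as used above.
print_start_sequence() {
    for res in "listener" "vip -n rac1" "vip -n rac2" \
               "scan_listener" "scan" "database -d orcl"; do
        echo "srvctl start $res"
    done
}
```

Piping the output through `sh` (as root, with srvctl on the PATH) would then execute the sequence for real.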
7. Check the VIP resource status
[root@rac1 bin]# ./srvctl status nodeapps
VIP rac1-vip is enabled                  
VIP rac1-vip is running on node: rac1    
VIP rac2-vip is enabled                  
VIP rac2-vip is running on node: rac2    
Network is enabled                       
Network is running on node: rac1         
Network is running on node: rac2         
GSD is disabled                          
GSD is not running on node: rac1         
GSD is not running on node: rac2         
ONS is enabled                           
ONS daemon is running on node: rac1      
ONS daemon is running on node: rac2    

After the change, all of the new IPs respond to ping.
OK, all done!

                                           