Oracle OCR and votedisk backup and recovery

This article describes in detail how to check Oracle Cluster Registry (OCR) and voting disk (votedisk) information, including the complete procedure for taking a manual backup, simulating disk corruption, and recovering from it. After checking the OCR and votedisk status and backing up the relevant files, the damaged disk is re-partitioned and re-created as an ASM disk and the recovery is carried out, keeping the Oracle cluster services highly available.


I. Checking OCR and votedisk information

1. View OCR information

[root@racnode1 bin]# ocrcheck

Status of Oracle Cluster Registry is as follows :
        Version                  :          3
        Total space (kbytes)     :     262120
        Used space (kbytes)      :       2808
        Available space (kbytes) :     259312
        ID                       : 1160667401
        Device/File Name         :       +OCR
                                    Device/File integrity check succeeded
        Device/File Name         :     +DATA2
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
        Cluster registry integrity check succeeded
        Logical corruption check succeeded
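ocrcheck must be run as a privileged user, and it generally exits nonzero when a check fails, so a simple periodic health probe can be scripted around it. A minimal sketch, assuming the Grid home path used in this environment (the log path is arbitrary, and the exit-status behaviour is worth verifying on your own version):

#!/bin/sh
# Minimal OCR health probe (sketch): run ocrcheck and flag any failure.
GRID_HOME=/u01/app/11.2.0/grid     # Grid home from this environment
LOG=/tmp/ocrcheck.log              # arbitrary log location
if ! "$GRID_HOME/bin/ocrcheck" > "$LOG" 2>&1; then
    echo "OCR integrity check failed; see $LOG" | logger -t ocrcheck
fi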

2. View votedisk information

[root@racnode1 bin]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   a3a3433cd0ce4ff5bfa0708f4f84f620 (ORCL:VOL1) [OCR]
Located 1 voting disk(s).

[root@racnode1 bin]#

[root@racnode1 bin]#

3. View OCR and votedisk backup information

Clusterware backs the OCR up automatically on the master node every four hours and keeps rolling four-hour, daily, and weekly copies, which is what the backup00/day/week files below are:

[root@racnode1 bin]# ocrconfig -showbackup

 

racnode1    2014/01/10 15:32:49    /u01/app/11.2.0/grid/cdata/racscan/backup00.ocr

 

racnode2    2014/01/06 15:20:23    /u01/app/11.2.0/grid/cdata/racscan/backup01.ocr

 

racnode1    2014/01/03 15:09:08    /u01/app/11.2.0/grid/cdata/racscan/backup02.ocr

 

racnode1    2014/01/10 15:32:49    /u01/app/11.2.0/grid/cdata/racscan/day.ocr

 

racnode1    2014/01/03 15:09:08    /u01/app/11.2.0/grid/cdata/racscan/week.ocr

 

racnode1    2014/01/03 11:31:05    /u01/app/11.2.0/grid/cdata/racscan/backup_20140103_113105.ocr

 

racnode1    2014/01/03 09:59:18    /u01/app/11.2.0/grid/cdata/racscan/backup_20140103_095918.ocr

 

racnode1    2014/01/03 09:55:16     /u01/app/11.2.0/grid/cdata/racscan/backup_20140103_095516.ocr

4. Manually back up the OCR and votedisk

[root@racnode1 bin]# ocrconfig -manualbackup

 

racnode1    2014/01/10 16:02:42    /u01/app/11.2.0/grid/cdata/racscan/backup_20140110_160242.ocr

 

racnode1    2014/01/03 11:31:05    /u01/app/11.2.0/grid/cdata/racscan/backup_20140103_113105.ocr

 

racnode1    2014/01/03 09:59:18    /u01/app/11.2.0/grid/cdata/racscan/backup_20140103_095918.ocr

 

racnode1    2014/01/03 09:55:16    /u01/app/11.2.0/grid/cdata/racscan/backup_20140103_095516.ocr

 

[root@racnode1 bin]# ocrconfig -showbackup

 

racnode1    2014/01/10 15:32:49    /u01/app/11.2.0/grid/cdata/racscan/backup00.ocr

 

racnode2    2014/01/06 15:20:23    /u01/app/11.2.0/grid/cdata/racscan/backup01.ocr

 

racnode1    2014/01/03 15:09:08    /u01/app/11.2.0/grid/cdata/racscan/backup02.ocr

 

racnode1    2014/01/10 15:32:49    /u01/app/11.2.0/grid/cdata/racscan/day.ocr

 

racnode1    2014/01/03 15:09:08    /u01/app/11.2.0/grid/cdata/racscan/week.ocr

 

racnode1    2014/01/10 16:02:42    /u01/app/11.2.0/grid/cdata/racscan/backup_20140110_160242.ocr

 

racnode1    2014/01/03 11:31:05    /u01/app/11.2.0/grid/cdata/racscan/backup_20140103_113105.ocr

 

racnode1    2014/01/03 09:59:18    /u01/app/11.2.0/grid/cdata/racscan/backup_20140103_095918.ocr

 

racnode1    2014/01/03 09:55:16    /u01/app/11.2.0/grid/cdata/racscan/backup_20140103_095516.ocr
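Besides the automatic backups listed above, ocrconfig also supports a logical export of the OCR, and the automatic backup location can be moved off the Grid home so backups survive the loss of that filesystem. A sketch with arbitrary target paths:

# Logical export of the OCR (run as root; the target path is arbitrary)
ocrconfig -export /backup/ocr_export_20140110.dmp

# Relocate the automatic OCR backups (the directory is an assumption)
ocrconfig -backuploc /backup/ocrbackup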

5. Back up the ASM spfile

[root@racnode1 bin]#

[root@racnode1 bin]# su - grid

racnode1->

racnode1->

racnode1-> sqlplus /nolog

 

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jan 10 16:03:53 2014

 

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

 

SQL> conn / as sysasm

Connected.

SQL>

SQL>

SQL> !                                               

racnode1->

racnode1->

racnode1-> crs_stat

NAME=ora.DATA2.dg

TYPE=ora.diskgroup.type

TARGET=ONLINE

STATE=ONLINE on racnode1

 

NAME=ora.DATEGROUP.dg

TYPE=ora.diskgroup.type

TARGET=OFFLINE

STATE=OFFLINE

 

NAME=ora.FLASHGROUP.dg

TYPE=ora.diskgroup.type

TARGET=OFFLINE

STATE=OFFLINE

 

NAME=ora.LISTENER.lsnr

TYPE=ora.listener.type

TARGET=ONLINE

STATE=ONLINE on racnode1

 

NAME=ora.LISTENER_SCAN1.lsnr

TYPE=ora.scan_listener.type

TARGET=ONLINE

STATE=ONLINE on racnode1

 

NAME=ora.OCR.dg

TYPE=ora.diskgroup.type

TARGET=ONLINE

STATE=ONLINE on racnode1

 

NAME=ora.asm

TYPE=ora.asm.type

TARGET=ONLINE

STATE=ONLINE on racnode1

 

NAME=ora.devdb.db

TYPE=ora.database.type

TARGET=OFFLINE

STATE=OFFLINE

 

NAME=ora.eons

TYPE=ora.eons.type

TARGET=ONLINE

STATE=ONLINE on racnode1

 

NAME=ora.gsd

TYPE=ora.gsd.type

TARGET=OFFLINE

STATE=OFFLINE

 

NAME=ora.net1.network

TYPE=ora.network.type

TARGET=ONLINE

STATE=ONLINE on racnode1

 

NAME=ora.oc4j

TYPE=ora.oc4j.type

TARGET=OFFLINE

STATE=OFFLINE

 

NAME=ora.ons

TYPE=ora.ons.type

TARGET=ONLINE

STATE=ONLINE on racnode1

 

NAME=ora.racnode1.ASM1.asm

TYPE=application

TARGET=ONLINE

STATE=ONLINE on racnode1

 

NAME=ora.racnode1.LISTENER_RACNODE1.lsnr

TYPE=application

TARGET=ONLINE

STATE=ONLINE on racnode1

 

NAME=ora.racnode1.gsd

TYPE=application

TARGET=OFFLINE

STATE=OFFLINE

 

NAME=ora.racnode1.ons

TYPE=application

TARGET=ONLINE

STATE=ONLINE on racnode1

 

NAME=ora.racnode1.vip

TYPE=ora.cluster_vip_net1.type

TARGET=ONLINE

STATE=ONLINE on racnode1

 

NAME=ora.racnode2.ASM2.asm

TYPE=application

TARGET=ONLINE

STATE=ONLINE on racnode2

 

NAME=ora.racnode2.LISTENER_RACNODE2.lsnr

TYPE=application

TARGET=ONLINE

STATE=ONLINE on racnode2

 

NAME=ora.racnode2.gsd

TYPE=application

TARGET=OFFLINE

STATE=OFFLINE

 

NAME=ora.racnode2.ons

TYPE=application

TARGET=ONLINE

STATE=ONLINE on racnode2

 

NAME=ora.racnode2.vip

TYPE=ora.cluster_vip_net1.type

TARGET=ONLINE

STATE=ONLINE on racnode2

 

NAME=ora.registry.acfs

TYPE=ora.registry.acfs.type

TARGET=ONLINE

STATE=ONLINE on racnode1

 

NAME=ora.scan1.vip

TYPE=ora.scan_vip.type

TARGET=ONLINE

STATE=ONLINE on racnode1

 

racnode1-> crsctl start resource ora.DATEGROUP.dg
CRS-2672: Attempting to start 'ora.DATEGROUP.dg' on 'racnode2'
CRS-2672: Attempting to start 'ora.DATEGROUP.dg' on 'racnode1'
CRS-2676: Start of 'ora.DATEGROUP.dg' on 'racnode2' succeeded
CRS-2676: Start of 'ora.DATEGROUP.dg' on 'racnode1' succeeded
racnode1-> crsctl start resource ora.FLASHGROUP.dg
CRS-2672: Attempting to start 'ora.FLASHGROUP.dg' on 'racnode2'
CRS-2672: Attempting to start 'ora.FLASHGROUP.dg' on 'racnode1'
CRS-2676: Start of 'ora.FLASHGROUP.dg' on 'racnode2' succeeded
CRS-2676: Start of 'ora.FLASHGROUP.dg' on 'racnode1' succeeded

racnode1->

racnode1->

racnode1->

racnode1-> exit

exit

 

SQL>

SQL> create pfile='/tmp/asmpfile0110.ora' from spfile;

 

File created.

 

SQL>

SQL> exit

Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

racnode1->

racnode1->

racnode1-> exit

logout
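On 11.2, asmcmd can also back the ASM spfile up directly, without the SQL*Plus step. A sketch run as the grid user; the destination path is arbitrary, and the source placeholder has to be filled in from spget's output:

asmcmd spget                                    # prints the current ASM spfile location
asmcmd spcopy <path_shown_by_spget> /home/grid/asmspfile0110.ora.bak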

II. Simulating OCR disk corruption

[root@racnode1 bin]# /usr/sbin/oracleasm listdisks

DISK01

DISK02

DISK03

VOL1

VOL2

VOL3

VOL4

VOL5

[root@racnode1 bin]# dd if=/dev/zero of=/dev/sdb

[1]+ Stopped                 dd if=/dev/zero of=/dev/sdb

1. Zero out the disk header

[root@racnode1 bin]# dd if=/dev/zero of=/dev/sdb bs=1024 count=1000
1000+0 records in
1000+0 records out
1024000 bytes (1.0 MB) copied, 0.097034 seconds, 10.6 MB/s
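Before moving on, the wiped header can be verified; a sketch (assuming the label really is gone, querydisk should no longer report a marked ASM disk, and the first sector reads back as zeroes):

/usr/sbin/oracleasm querydisk /dev/sdb1
dd if=/dev/sdb bs=512 count=1 2>/dev/null | od -c | head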

[root@racnode1 bin]# fdisk -l

 

Disk /dev/sda: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sda1  *           1        2304   18506848+  83  Linux

/dev/sda2            2305        2610    2457945   82  Linux swap / Solaris

 

Disk /dev/sdb: 3221 MB, 3221225472 bytes

255 heads, 63 sectors/track, 391 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Disk /dev/sdb doesn't contain a valid partition table

 

Disk /dev/sdc: 3221 MB, 3221225472 bytes

255 heads, 63 sectors/track, 391 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sdc1               1         391    3140676   83  Linux

 

Disk /dev/sdd: 3221 MB, 3221225472 bytes

255 heads, 63 sectors/track, 391 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sdd1               1         391    3140676   83  Linux

 

Disk /dev/sde: 3221 MB, 3221225472 bytes

255 heads, 63 sectors/track, 391 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sde1               1         391    3140676   83  Linux

 

Disk /dev/sdf: 3221 MB, 3221225472 bytes

255 heads, 63 sectors/track, 391 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sdf1               1         391    3140676   83  Linux

 

Disk /dev/sdg: 1073 MB, 1073741824 bytes

255 heads, 63 sectors/track, 130 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sdg1               1         130    1044193+  83  Linux

 

Disk /dev/sdh: 1073 MB, 1073741824 bytes

255 heads, 63 sectors/track, 130 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sdh1               1         130    1044193+  83  Linux

 

Disk /dev/sdi: 1073 MB, 1073741824 bytes

255 heads, 63 sectors/track, 130 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sdi1               1         130    1044193+  83  Linux

2. Rebuild the partition

[root@racnode1 bin]# fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

 

Command (m for help): m

Command action

  a   toggle a bootable flag

  b   edit bsd disklabel

  c   toggle the dos compatibility flag

  d   delete a partition

  l   list known partition types

  m   print this menu

  n   add a new partition

  o   create a new empty DOS partition table

  p   print the partition table

  q   quit without saving changes

  s   create a new empty Sun disklabel

  t   change a partition's system id

  u   change display/entry units

  v   verify the partition table

  w   write table to disk and exit

  x   extra functionality (experts only)

 

Command (m for help): n

Command action

  e   extended

  p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-391, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-391, default 391):

Using default value 391

 

Command (m for help): w

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

 

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.

Syncing disks.
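The kernel refused to re-read the table (error 16) because the device is still held open. Instead of waiting for a reboot, a re-read can usually be forced once the device is free; a sketch using partprobe from the parted package (while ASM still has the disk open this will fail the same way):

/usr/sbin/partprobe /dev/sdb      # ask the kernel to re-read the partition table
# or, with util-linux:
# blockdev --rereadpt /dev/sdb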

[root@racnode1 bin]# fdisk -l

 

Disk /dev/sda: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sda1  *           1        2304   18506848+  83  Linux

/dev/sda2            2305        2610    2457945   82  Linux swap / Solaris

 

Disk /dev/sdb: 3221 MB, 3221225472 bytes

255 heads, 63 sectors/track, 391 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sdb1               1         391    3140676   83  Linux

 

Disk /dev/sdc: 3221 MB, 3221225472 bytes

255 heads, 63 sectors/track, 391 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sdc1               1         391    3140676   83  Linux

 

Disk /dev/sdd: 3221 MB, 3221225472 bytes

255 heads, 63 sectors/track, 391 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sdd1               1         391    3140676   83  Linux

 

Disk /dev/sde: 3221 MB, 3221225472 bytes

255 heads, 63 sectors/track, 391 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sde1               1         391    3140676   83  Linux

 

Disk /dev/sdf: 3221 MB, 3221225472 bytes

255 heads, 63 sectors/track, 391 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sdf1               1         391    3140676   83 Linux

 

Disk /dev/sdg: 1073 MB, 1073741824 bytes

255 heads, 63 sectors/track, 130 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sdg1               1         130    1044193+  83  Linux

 

Disk /dev/sdh: 1073 MB, 1073741824 bytes

255 heads, 63 sectors/track, 130 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sdh1               1         130    1044193+  83  Linux

 

Disk /dev/sdi: 1073 MB, 1073741824 bytes

255 heads, 63 sectors/track, 130 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

  Device Boot      Start         End      Blocks  Id  System

/dev/sdi1               1         130    1044193+  83  Linux

[root@racnode1 bin]#

3. Rebuild the ASM disk

3.1 Scan the ASM disks

[root@racnode1 bin]# /usr/sbin/oracleasm scandisks

Reloading disk partitions: done

Cleaning any stale ASM disks...

Cleaning disk "VOL1"

Scanning system for ASM disks...

3.2 List the ASM disks

[root@racnode1 bin]# /usr/sbin/oracleasm listdisks

DISK01

DISK02

DISK03

VOL2

VOL3

VOL4

VOL5

3.3 Re-create the ASM disk

[root@racnode1 bin]# /usr/sbin/oracleasm createdisk VOL1 /dev/sdb1
Unable to open device "/dev/sdb1": Device or resource busy

This fails: the ASM disk is still in use, so it cannot be re-created.
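To see exactly which processes hold the device open, fuser (or lsof, where installed) can be pointed at it; a sketch, the holders here being the ASM instance and Clusterware daemons:

/sbin/fuser -v /dev/sdb1          # lists the PIDs with the block device open
lsof /dev/sdb1                    # the same information, one line per open descriptor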

3.3.1 Stop the CRS stack

[root@racnode1 bin]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode1'
CRS-2673: Attempting to stop 'ora.crsd' on 'racnode1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racnode1'
CRS-2673: Attempting to stop 'ora.racnode1.vip' on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'racnode1'
CRS-2673: Attempting to stop 'ora.DATA2.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'racnode1'
CRS-2673: Attempting to stop 'ora.DATEGROUP.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.FLASHGROUP.dg' on 'racnode1'
CRS-2677: Stop of 'ora.racnode1.vip' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.racnode2.vip' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'racnode1'
CRS-2677: Stop of 'ora.racnode2.vip' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.DATA2.dg' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.OCR.dg' on 'racnode1' succeeded
ORA-00600: internal error code, arguments: [ksprcvsp0], [0], [0], [], [], [], [], [], [], [], [], []
CRS-2677: Stop of 'ora.DATEGROUP.dg' on 'racnode1' succeeded
CRS-2679: Attempting to clean 'ora.DATEGROUP.dg' on 'racnode1'
ORA-00600: internal error code, arguments: [ksprcvsp0], [0], [0], [], [], [], [], [], [], [], [], []
CRS-2677: Stop of 'ora.FLASHGROUP.dg' on 'racnode1' succeeded
CRS-2679: Attempting to clean 'ora.FLASHGROUP.dg' on 'racnode1'
CRS-2681: Clean of 'ora.FLASHGROUP.dg' on 'racnode1' succeeded
CRS-2681: Clean of 'ora.DATEGROUP.dg' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'racnode1'
CRS-2673: Attempting to stop 'ora.eons' on 'racnode1'
CRS-2677: Stop of 'ora.ons' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'racnode1'
CRS-2677: Stop of 'ora.net1.network' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.eons' on 'racnode1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racnode1' has completed
CRS-2677: Stop of 'ora.crsd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'racnode1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'racnode1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.evmd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'racnode1' succeeded

[2]+ Stopped                 crsctl stop crs -f

The shutdown hangs at this point; the Clusterware log shows:

[client(26981)]CRS-10001:Waiting for ASM to shutdown.
[client(26992)]CRS-10001:Waiting for ASM to shutdown.
[client(27003)]CRS-10001:Waiting for ASM to shutdown.
[client(27012)]CRS-10001:Waiting for ASM to shutdown.
[client(27021)]CRS-10001:Waiting for ASM to shutdown.
[client(27033)]CRS-10001:Waiting for ASM to shutdown.
[client(27042)]CRS-10001:Waiting for ASM to shutdown.
[client(27051)]CRS-10001:Waiting for ASM to shutdown.
[client(27062)]CRS-10001:Waiting for ASM to shutdown.

Evidently ASM cannot shut down, so the only option left is to kill its processes.
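Rather than hunting the PIDs down one at a time as below, the surviving +ASM1 background processes could be killed in bulk. A destructive, lab-only sketch (the [a] in the pattern keeps grep from matching its own command line):

ps -ef | grep '[a]sm_.*_+ASM1' | awk '{print $2}' | xargs -r kill -9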

3.3.2 Kill the remaining processes
3.3.2.1 Kill ohasd

[root@racnode1 bin]# ps -ef | grep ohas

root     3052     1  0 09:18 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run

root    14129     1  0 11:16 ?        00:01:09 /u01/app/11.2.0/grid/bin/ohasd.bin reboot

root    26965  4264  0 16:28 pts/1    00:00:00 grep ohas

[root@racnode1 bin]# kill -9 14129

[root@racnode1 bin]# ps -ef | grep ohas

root     3052     1  0 09:18 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run

root    26976  4264  0 16:28 pts/1    00:00:00 grep ohas

[root@racnode1 bin]# ps -ef | grep asm

grid    14577     1  0 11:17 ?        00:00:01 asm_pmon_+ASM1

grid    14579     1  0 11:17 ?        00:00:13 asm_vktm_+ASM1

grid    14583     1  0 11:17 ?        00:00:00 asm_gen0_+ASM1

grid    14585     1  0 11:17 ?        00:00:06 asm_diag_+ASM1

grid    14587     1  0 11:17 ?        00:00:02 asm_ping_+ASM1

grid    14589     1  0 11:17 ?        00:00:00 asm_psp0_+ASM1

grid    14591     1  0 11:17 ?        00:00:29 asm_dia0_+ASM1

grid    14593     1  0 11:17 ?        00:00:16 asm_lmon_+ASM1

grid    14595     1  0 11:17 ?        00:00:14 asm_lmd0_+ASM1

grid    14606     1  0 11:17 ?        00:00:07 asm_lms0_+ASM1

grid    14610     1  0 11:17 ?        00:00:00 asm_lmhb_+ASM1

grid    14612     1  0 11:17 ?        00:00:00 asm_mman_+ASM1

grid    14614     1  0 11:17 ?        00:00:00 asm_dbw0_+ASM1

grid    14616     1  0 11:17 ?        00:00:00 asm_lgwr_+ASM1

grid    14618     1  0 11:17 ?        00:00:00 asm_ckpt_+ASM1

grid    14620     1  0 11:17 ?        00:00:00 asm_smon_+ASM1

grid    14622     1  0 11:17 ?        00:00:03 asm_rbal_+ASM1

grid    14624     1  0 11:17 ?        00:00:01 asm_gmon_+ASM1

grid    14626     1  0 11:17 ?        00:00:01 asm_mmon_+ASM1

grid    14628     1  0 11:17 ?        00:00:02 asm_mmnl_+ASM1

grid    14633     1  0 11:17 ?        00:00:01 asm_lck0_+ASM1

grid    14669     1  0 11:17 ?        00:00:00 asm_asmb_+ASM1

grid    14671     1  0 11:17 ?        00:00:00 oracle+ASM1_asmb_+asm1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

grid    26761     1  0 16:23 ?        00:00:00 asm_o001_+ASM1

grid    26764     1  0 16:23 ?        00:00:00 oracle+ASM1_o001_+asm1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

root    26987  4264  0 16:28 pts/1    00:00:00 grep asm

3.3.2.2 Force-kill the ASM instance processes

[root@racnode1 bin]# kill -9 14577

[root@racnode1 bin]# ps -ef | grep asm

root    27157  4264  0 16:31 pts/1    00:00:00 grep asm

[root@racnode1 bin]# ps -ef | grep asm

root    27227  4264  0 16:32 pts/1    00:00:00 grep asm

[root@racnode1 bin]#

Re-creating the ASM disk still fails:

[root@racnode1 bin]# /usr/sbin/oracleasm createdisk VOL1 /dev/sdb1
Unable to open device "/dev/sdb1": Device or resource busy

[root@racnode1 bin]#

3.3.2.3 Check cssd (the cluster synchronization daemon)

[root@racnode1 bin]# ps -ef | grep d.bin

grid    14279     1  0 11:16 ?        00:00:00 /u01/app/11.2.0/grid/bin/gipcd.bin
root    14440     1  0 11:16 ?        00:00:24 /u01/app/11.2.0/grid/bin/cssdagent
grid    14458     1  0 11:16 ?        00:01:13 /u01/app/11.2.0/grid/bin/ocssd.bin
grid    14461     1  0 11:16 ?        00:00:14 /u01/app/11.2.0/grid/bin/diskmon.bin -d -f
root    26588  4264  0 16:22 pts/1    00:00:00 /u01/app/11.2.0/grid/bin/crsctl.bin stop crs -f

root    27260  4264  0 16:32 pts/1    00:00:00 grep d.bin

[root@racnode1 bin]# kill -9 4264

 [root@racnode1 ~]# kill -9 14279

[root@racnode1 ~]# ps -ef | grep d.bin

root    14440     1  0 11:16 ?        00:00:24 /u01/app/11.2.0/grid/bin/cssdagent
grid    14458     1  0 11:16 ?        00:01:13 /u01/app/11.2.0/grid/bin/ocssd.bin
grid    14461     1  0 11:16 ?        00:00:14 /u01/app/11.2.0/grid/bin/diskmon.bin -d -f

root    27316 27275  0 16:34 pts/1    00:00:00 grep d.bin

[root@racnode1 ~]# kill -9 1440

-bash: kill: (1440) - No such process

[root@racnode1 ~]# kill -9 14440

[root@racnode1 ~]# kill -9 14458

[root@racnode1 ~]# kill -9 14461

-bash: kill: (14461) - No such process

 [root@racnode1 ~]# ps -ef | grep d.bin

root    27337 27275  0 16:35 pts/1    00:00:00 grep d.bin

[root@racnode1 ~]#

[root@racnode1 ~]# ps -ef | grep asm

root    27342 27275  0 16:35 pts/1    00:00:00 grep asm

[root@racnode1 ~]# ps -ef | grep ora

root     2240  2216  0 09:17 ?        00:00:07 hald-addon-storage: polling /dev/hdc

root    27347 27275  0 16:35 pts/1    00:00:00 grep ora

3.3.2.4 Re-create the ASM disk

[root@racnode1 ~]# /usr/sbin/oracleasm createdisk VOL1 /dev/sdb1

Writing disk header: done

Instantiating disk: done

[root@racnode1 ~]#

[root@racnode1 ~]# /usr/sbin/oracleasm listdisks

DISK01

DISK02

DISK03

VOL1

VOL2

VOL3

VOL4

VOL5
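createdisk was only run on racnode1; the other node picks the restored label up with a rescan, which is worth doing before restarting the stack there. A sketch:

# On racnode2, as root:
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks     # VOL1 should be listed again here too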

III. Restore the OCR

1. Start CRS in exclusive mode

1.1 Attempt a normal startup

[root@racnode1 ~]# crsctl start crs

CRS-4123: Oracle High Availability Services has been started.

Check the Clusterware alert log:

[ohasd(27446)]CRS-2112:The OLR service started on node racnode1.
2014-01-10 16:38:29.016
[ohasd(27446)]CRS-2772:Server 'racnode1' has been assigned to pool 'Free'.
2014-01-10 16:38:29.086
[ohasd(27446)]CRS-8017:location: /etc/oracle/lastgasp has 124 reboot advisory log files, 0 were announced and 0 errors occurred
2014-01-10 16:38:31.272
[ohasd(27446)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
2014-01-10 16:38:32.823
[cssd(27659)]CRS-1713:CSSD daemon is started in clustered mode
2014-01-10 16:38:39.075
[cssd(27659)]CRS-1603:CSSD on node racnode1 shutdown by user.
2014-01-10 16:38:39.198
[ohasd(27446)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'racnode1'.
[client(27835)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
[client(27843)]CRS-10001:ACFS-9322: done.
2014-01-10 16:38:50.832
[cssd(27856)]CRS-1713:CSSD daemon is started in clustered mode
2014-01-10 16:38:51.106
[cssd(27856)]CRS-1603:CSSD on node racnode1 shutdown by user.
2014-01-10 16:38:57.950
[ohasd(27446)]CRS-2765:Resource 'ora.diskmon' has failed on server 'racnode1'.
2014-01-10 16:43:07.412
[/u01/app/11.2.0/grid/bin/cssdmonitor(27793)]CRS-5822:Agent '/u01/app/11.2.0/grid/bin/cssdmonitor_root' disconnected from server. Details at (:CRSAGF00117:) in /u01/app/11.2.0/grid/log/racnode1/agent/ohasd/oracssdmonitor_root/oracssdmonitor_root.log.
2014-01-10 16:43:07.413
[/u01/app/11.2.0/grid/bin/orarootagent.bin(27631)]CRS-5822:Agent '/u01/app/11.2.0/grid/bin/orarootagent_root' disconnected from server. Details at (:CRSAGF00117:) in /u01/app/11.2.0/grid/log/racnode1/agent/ohasd/orarootagent_root/orarootagent_root.log.
2014-01-10 16:43:07.415
[/u01/app/11.2.0/grid/bin/cssdagent(27811)]CRS-5822:Agent '/u01/app/11.2.0/grid/bin/cssdagent_root' disconnected from server. Details at (:CRSAGF00117:) in /u01/app/11.2.0/grid/log/racnode1/agent/ohasd/oracssdagent_root/oracssdagent_root.log.
2014-01-10 16:45:07.474

From this it is clear that the CRS startup failed (the voting disk it needs was on the wiped disk group). Shut the stack back down:

[root@racnode1 ~]# crsctl stop crs -f

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'racnode1'
CRS-2677: Stop of 'ora.mdnsd' on 'racnode1' succeeded

[1]+ Stopped                 crsctl stop crs -f

[root@racnode1 ~]# ps -ef | grep asm

root    28116 27275  0 16:42 pts/1    00:00:00 grep asm

[root@racnode1 ~]# ps -ef | grep d.bin

root    27446     1  0 16:38 ?        00:00:01 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
grid    27566     1  0 16:38 ?        00:00:00 /u01/app/11.2.0/grid/bin/oraagent.bin
grid    27581     1  0 16:38 ?        00:00:00 /u01/app/11.2.0/grid/bin/gipcd.bin
grid    27598     1  0 16:38 ?        00:00:00 /u01/app/11.2.0/grid/bin/gpnpd.bin
root    27631     1  0 16:38 ?        00:00:00 /u01/app/11.2.0/grid/bin/orarootagent.bin
grid    27702     1  0 16:38 ?        00:00:00 /u01/app/11.2.0/grid/bin/evmd.bin
root    27793     1  0 16:38 ?        00:00:00 /u01/app/11.2.0/grid/bin/cssdmonitor
root    27811     1  0 16:38 ?        00:00:00 /u01/app/11.2.0/grid/bin/cssdagent
grid    27885     1  0 16:38 ?        00:00:00 /u01/app/11.2.0/grid/bin/diskmon.bin -d -f
root    27957 27275  0 16:40 pts/1    00:00:00 /u01/app/11.2.0/grid/bin/crsctl.bin stop crs -f

root    28124 27275  0 16:42 pts/1    00:00:00 grep d.bin

[root@racnode1 ~]# kill -9 27446

[root@racnode1 ~]# kill -9 27566

-bash: kill: (27566) - No such process

[root@racnode1 ~]# ps -ef | grep d.bin

grid    27581     1  0 16:38 ?        00:00:00 /u01/app/11.2.0/grid/bin/gipcd.bin
grid    27598     1  0 16:38 ?        00:00:00 /u01/app/11.2.0/grid/bin/gpnpd.bin
grid    27702     1  0 16:38 ?        00:00:00 /u01/app/11.2.0/grid/bin/evmd.bin
grid    27885     1  0 16:38 ?        00:00:00 /u01/app/11.2.0/grid/bin/diskmon.bin -d -f
root    27957 27275  0 16:40 pts/1    00:00:00 /u01/app/11.2.0/grid/bin/crsctl.bin stop crs -f

root    28160 27275  0 16:43 pts/1    00:00:00 grep d.bin

[root@racnode1 ~]# kill -9 27581

[root@racnode1 ~]# kill -9 27598

[root@racnode1 ~]# kill -9 27702

[root@racnode1 ~]# kill -9 27885

[root@racnode1 ~]# ps -ef | grep d.bin

root    27957 27275  0 16:40 pts/1    00:00:00 /u01/app/11.2.0/grid/bin/crsctl.bin stop crs -f

root    28201 27275  0 16:44 pts/1    00:00:00 grep d.bin

[root@racnode1 ~]# kill -9 27957

[root@racnode1 ~]#

[1]+ Killed                  crsctl stop crs -f

[root@racnode1 ~]#

[root@racnode1 ~]#

[root@racnode1 ~]# ps -ef | grep d.bin

root    28212 27275  0 16:44 pts/1    00:00:00 grep d.bin

[root@racnode1 ~]# ps -ef | grep ora

root     2240  2216  0 09:17 ?        00:00:07 hald-addon-storage: polling /dev/hdc

root    28217 27275  0 16:44 pts/1    00:00:00 grep ora

1.2 Start CRS in exclusive mode

[root@racnode1 ~]# crsctl start crs -excl

CRS-4123: Oracle High Availability Services has been started.
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'racnode1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'racnode1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'racnode1'
CRS-2676: Start of 'ora.mdnsd' on 'racnode1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'racnode1'
CRS-2676: Start of 'ora.gpnpd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racnode1'
CRS-2676: Start of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'racnode1'
CRS-2679: Attempting to clean 'ora.diskmon' on 'racnode1'
CRS-2681: Clean of 'ora.diskmon' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.diskmon' on 'racnode1'
CRS-2676: Start of 'ora.diskmon' on 'racnode1' succeeded
CRS-2676: Start of 'ora.cssd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'racnode1'
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'racnode1'
CRS-2676: Start of 'ora.drivers.acfs' on 'racnode1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'racnode1' succeeded
CRS-2679: Attempting to clean 'ora.asm' on 'racnode1'
CRS-5011: Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/u01/app/11.2.0/grid/log/racnode1/agent/ohasd/oraagent_grid/oraagent_grid.log"
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Process ID: 0
Session ID: 0 Serial number: 0
CRS-5011: Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/u01/app/11.2.0/grid/log/racnode1/agent/ohasd/oraagent_grid/oraagent_grid.log"
CRS-2681: Clean of 'ora.asm' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded

[root@racnode1 ~]#

The ORA-01034 messages can be ignored: they come from the resource check while ASM is still down, and ASM is started successfully right after.

1.3 Re-create the disk group

[root@racnode1 ~]# crscheck

-bash: crscheck: command not found

[root@racnode1 ~]#

[root@racnode1 ~]# su - grid

racnode1->

racnode1-> sqlplus /nolog

 

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jan 10 16:47:16 2014

 

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

 

SQL> conn / as sysasm

Connected.

SQL>

SQL>

SQL> create diskgroup ocr external redundancy disk 'ORCL:VOL1';

 

Diskgroup created.

 

 

SQL> create spfile='+OCR' from pfile='/tmp/asmpfile0110.ora';
create spfile='+OCR' from pfile='/tmp/asmpfile0110.ora'
*
ERROR at line 1:
ORA-17502: ksfdcre:4 Failed to create file +OCR
ORA-15221: ASM operation requires compatible.asm of 11.2.0.0.0 or higher

The error occurs because a newly created disk group defaults to compatible.asm 10.1, which is too low for an 11.2 spfile; raise the attribute first:

 

SQL> alter diskgroup ocr set attribute 'compatible.asm'='11.2.0.0.0';

 

Diskgroup altered.

 

SQL> create spfile='+OCR' from pfile='/tmp/asmpfile0110.ora';

 

File created.

 

SQL>

SQL>

SQL> exit

Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

racnode1->

racnode1->

racnode1->

racnode1-> exit

logout
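Incidentally, the ORA-15221 detour above can be avoided by setting the attribute at creation time; a sketch using the same objects:

SQL> create diskgroup ocr external redundancy disk 'ORCL:VOL1'
  2  attribute 'compatible.asm'='11.2';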

[root@racnode1 ~]# ocrconfig -showbackup

 

racnode1    2014/01/10 15:32:49    /u01/app/11.2.0/grid/cdata/racscan/backup00.ocr

 

racnode2    2014/01/06 15:20:23    /u01/app/11.2.0/grid/cdata/racscan/backup01.ocr

 

racnode1    2014/01/03 15:09:08     /u01/app/11.2.0/grid/cdata/racscan/backup02.ocr

 

racnode1    2014/01/10 15:32:49    /u01/app/11.2.0/grid/cdata/racscan/day.ocr

 

racnode1    2014/01/03 15:09:08    /u01/app/11.2.0/grid/cdata/racscan/week.ocr

 

racnode1    2014/01/10 16:02:42     /u01/app/11.2.0/grid/cdata/racscan/backup_20140110_160242.ocr

 

racnode1    2014/01/03 11:31:05    /u01/app/11.2.0/grid/cdata/racscan/backup_20140103_113105.ocr

 

racnode1    2014/01/03 09:59:18    /u01/app/11.2.0/grid/cdata/racscan/backup_20140103_095918.ocr

 

racnode1    2014/01/03 09:55:16    /u01/app/11.2.0/grid/cdata/racscan/backup_20140103_095516.ocr

1.4 Restore the OCR

[root@racnode1 ~]# ocrconfig -restore /u01/app/11.2.0/grid/cdata/racscan/backup_20140110_160242.ocr
PROT-19: Cannot proceed while the Cluster Ready Service is running
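PROT-19 is raised because crsd is part of the exclusive-mode stack started earlier. The usual way around it, not needed in the end here, is to stop just the crsd resource and then restore; on 11.2.0.2 and later the -nocrs flag starts the exclusive stack without crsd in the first place. A sketch:

# 11.2.0.1: stop only crsd, keeping the rest of the exclusive stack up
crsctl stop resource ora.crsd -init
ocrconfig -restore /u01/app/11.2.0/grid/cdata/racscan/backup_20140110_160242.ocr

# 11.2.0.2 and later: start exclusive mode without crsd at all
crsctl start crs -excl -nocrs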

With crsd still running, switch to the grid user and check what is already on the disk group:

[root@racnode1 ~]# su - grid

racnode1->

racnode1->

racnode1-> asmcmd

ASMCMD>

ASMCMD> ls -l

State   Type    Rebal  Name

MOUNTED NORMAL  N      DATA2/

MOUNTED EXTERN  N      DATEGROUP/

MOUNTED EXTERN  N      FLASHGROUP/

MOUNTED EXTERN  N      OCR/

ASMCMD> cd ocr

ASMCMD> ls -l

Type Redund  Striped  Time             Sys  Name

                                        Y    racscan/

ASMCMD> cd racscan

ASMCMD> ls -l

Type Redund  Striped  Time            Sys  Name

                                        Y    ASMPARAMETERFILE/

                                        Y    OCRFILE/

ASMCMD> cd ocrfile

ASMCMD> ls -l

Type    Redund  Striped  Time             Sys  Name

OCRFILE UNPROT  COARSE   JAN 10 17:00:00  Y   REGISTRY.255.836499287

ASMCMD>

ASMCMD>

ASMCMD> exit

There is already an OCR file on the OCR disk group, hence the error. The reason is not clear here; possibly the mirrored OCR file (on +DATA2) was copied across when the stack started.

 

racnode1-> crsctl check crs

CRS-4638: Oracle High Availability Services is online
CRS-4692: Cluster Ready Services is online in exclusive mode
CRS-4529: Cluster Synchronization Services is online

racnode1-> ocrcheck

Status of Oracle Cluster Registry is as follows :
        Version                  :          3
        Total space (kbytes)     :     262120
        Used space (kbytes)      :       2808
        Available space (kbytes) :     259312
        ID                       : 1160667401
        Device/File Name         :       +OCR
                                    Device/File integrity check succeeded
        Device/File Name         :     +DATA2
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
        Cluster registry integrity check succeeded
        Logical corruption check bypassed due to non-privileged user

1.5 Recover the votedisk

1.5.1 Check votedisk information

racnode1-> crsctl query css votedisk

Located 0 voting disk(s).

1.5.2 Recover the votedisk

racnode1-> crsctl replace votedisk +OCR

Successful addition of voting disk 048b0833c5214f51bfcad27eed6d45ca.
Successfully replaced voting disk group with +OCR.
CRS-4266: Voting file(s) successfully replaced

 

racnode1-> crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   048b0833c5214f51bfcad27eed6d45ca (ORCL:VOL1) [OCR]

Located 1 voting disk(s).

1.6 Restart the CRS stack

1.6.1 Stop the exclusive-mode CRS

racnode1->

racnode1-> crsctl stop crs

CRS-4563: Insufficient user privileges.

CRS-4000: Command Stop failed, or completed with errors.

racnode1-> exit

logout

[root@racnode1 ~]# crsctl stop crs

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode1'
CRS-2673: Attempting to stop 'ora.crsd' on 'racnode1'
CRS-2677: Stop of 'ora.crsd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'racnode1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'racnode1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'racnode1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racnode1'
CRS-2677: Stop of 'ora.cssd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'racnode1'
CRS-2673: Attempting to stop 'ora.gipcd' on 'racnode1'
CRS-2677: Stop of 'ora.gipcd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'racnode1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

1.6.2 Start CRS normally

[root@racnode1 ~]# crsctl start crs

CRS-4123: Oracle High Availability Services has been started.

[root@racnode1 ~]# crs_stat -t -v

Name           Type           R/RA  F/FT   Target    State    Host       

----------------------------------------------------------------------

ora.DATA2.dg   ora....up.type 0/5    0/    ONLINE    ONLINE    racnode1   

ora....ROUP.dg ora....up.type 0/5    0/    ONLINE    ONLINE    racnode1   

ora....ROUP.dg ora....up.type 0/5    0/    ONLINE    ONLINE    racnode1   

ora....ER.lsnr ora....er.type 0/5    0/    ONLINE    ONLINE    racnode1   

ora....N1.lsnr ora....er.type 0/5    0/0   ONLINE    ONLINE    racnode1   

ora.OCR.dg     ora....up.type 0/5    0/    ONLINE    ONLINE    racnode1   

ora.asm        ora.asm.type   0/5   0/     ONLINE    ONLINE   racnode1   

ora.devdb.db   ora....se.type 0/2    0/1   OFFLINE   OFFLINE              

ora.eons       ora.eons.type  0/3   0/     ONLINE    ONLINE   racnode1   

ora.gsd        ora.gsd.type   0/5   0/     OFFLINE   OFFLINE              

ora....network ora....rk.type 0/5    0/    ONLINE    ONLINE    racnode1   

ora.oc4j       ora.oc4j.type  0/5   0/0    OFFLINE   OFFLINE              

ora.ons        ora.ons.type   0/3   0/     ONLINE    ONLINE   racnode1   

ora....SM1.asm application    0/5   0/0    ONLINE    ONLINE   racnode1   

ora....E1.lsnr application    0/5   0/0    ONLINE    ONLINE   racnode1   

ora....de1.gsd application    0/5   0/0    OFFLINE   OFFLINE              

ora....de1.ons application    0/3   0/0    ONLINE    ONLINE   racnode1   

ora....de1.vip ora....t1.type 0/0    0/0   ONLINE    ONLINE    racnode1   

ora....de2.vip ora....t1.type 0/0    0/0   ONLINE    ONLINE    racnode1   

ora....ry.acfs ora....fs.type 0/5    0/    ONLINE    ONLINE    racnode1   

ora.scan1.vip  ora....ip.type 0/0    0/0   ONLINE    ONLINE    racnode1   

 

After the stack is started on racnode2 as well:

 

[root@racnode1 ~]# crs_stat -t -v

Name           Type           R/RA   F/FT  Target    State     Host       

----------------------------------------------------------------------

ora.DATA2.dg   ora....up.type 0/5    0/    ONLINE    ONLINE    racnode1   

ora....ROUP.dg ora....up.type 0/5    0/    ONLINE    ONLINE    racnode1   

ora....ROUP.dg ora....up.type 0/5    0/    ONLINE    ONLINE    racnode1   

ora....ER.lsnr ora....er.type 0/5    0/    ONLINE    ONLINE    racnode1   

ora....N1.lsnr ora....er.type 0/5    0/0   ONLINE    ONLINE    racnode1   

ora.OCR.dg     ora....up.type 0/5    0/    ONLINE    ONLINE    racnode1   

ora.asm        ora.asm.type   0/5   0/     ONLINE    ONLINE   racnode1   

ora.devdb.db   ora....se.type 0/2    0/1   OFFLINE   OFFLINE              

ora.eons       ora.eons.type  0/3   0/     ONLINE    ONLINE   racnode1   

ora.gsd        ora.gsd.type   0/5   0/     OFFLINE   OFFLINE              

ora....network ora....rk.type 0/5    0/    ONLINE    ONLINE   racnode1   

ora.oc4j       ora.oc4j.type  0/5   0/0    OFFLINE   OFFLINE              

ora.ons        ora.ons.type   0/3   0/     ONLINE    ONLINE   racnode1   

ora....SM1.asm application    0/5   0/0    ONLINE    ONLINE   racnode1   

ora....E1.lsnr application    0/5   0/0    ONLINE    ONLINE   racnode1   

ora....de1.gsd application    0/5   0/0    OFFLINE   OFFLINE              

ora....de1.ons application    0/3   0/0    ONLINE    ONLINE   racnode1   

ora....de1.vip ora....t1.type 0/0    0/0   ONLINE    ONLINE    racnode1   

ora....SM2.asm application    0/5   0/0    ONLINE    ONLINE   racnode2   

ora....E2.lsnr application    0/5   0/0    ONLINE    ONLINE   racnode2   

ora....de2.gsd application    0/5   0/0    OFFLINE   OFFLINE              

ora....de2.ons application    0/3   0/0    ONLINE    ONLINE   racnode2   

ora....de2.vip ora....t1.type 0/0    0/0   ONLINE    ONLINE    racnode2   

ora....ry.acfs ora....fs.type 0/5    0/    ONLINE    ONLINE    racnode1   

ora.scan1.vip  ora....ip.type 0/0    0/0   ONLINE    ONLINE    racnode1   

1.6.3 Verify service status

[root@racnode1 ~]# cluvfy -h

You must NOT be logged in as root (uid=0) when running /u01/app/11.2.0/grid/bin/cluvfy.

[root@racnode1 ~]# cluvfy comp ocr -n all

You must NOT be logged in as root (uid=0) when running /u01/app/11.2.0/grid/bin/cluvfy.

[root@racnode1 ~]# su - grid

racnode1-> cluvfy -h

 

ERROR:

Invalid command line syntax.

 

USAGE:

cluvfy [-help]

cluvfy stage {-list|-help}

cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

cluvfy comp {-list|-help}

cluvfy comp <component-name> <component-specific options>  [-verbose]

 

 

racnode1-> cluvfy comp -h

 

ERROR:

Invalid command line syntax.

 

USAGE:

cluvfy comp <component-name> <component-specific options>  [-verbose]

 

 

SYNTAX (for Components):

cluvfy comp nodereach -n <node_list> [-srcnode <srcnode>]  [-verbose]
cluvfy comp nodecon -n <node_list> [-i <interface_list>]  [-verbose]
cluvfy comp cfs  [-n <node_list>] -f <file_system>  [-verbose]
cluvfy comp ssa  [-n <node_list>]  [-s <storageID_list>]
                 [-t {software|data|ocr_vdisk}]  [-verbose]
cluvfy comp space  [-n <node_list>] -l <storage_location>
                   -z <disk_space>{B|K|M|G}  [-verbose]
cluvfy comp sys  [-n <node_list>] -p {crs|ha|database}
                 [-r {10gR1|10gR2|11gR1|11gR2}]
                 [-osdba <osdba_group>]  [-orainv <orainventory_group>]
                 [-fixup [-fixupdir <fixup_dir>]] [-verbose]
cluvfy comp clu  [-n <node_list>]  [-verbose]
cluvfy comp clumgr  [-n <node_list>]  [-verbose]
cluvfy comp ocr  [-n <node_list>]  [-verbose]
cluvfy comp olr  [-verbose]
cluvfy comp ha   [-verbose]
cluvfy comp crs  [-n <node_list>]  [-verbose]
cluvfy comp nodeapp  [-n <node_list>]  [-verbose]
cluvfy comp admprv  [-n <node_list>]  [-verbose]
       -o user_equiv  [-sshonly]
       -o crs_inst [-orainv <orainventory_group>]
                    [-fixup [-fixupdir <fixup_dir>]]
       -o db_inst  [-osdba <osdba_group>]
                    [-fixup [-fixupdir <fixup_dir>]]
       -o db_config  -d <oracle_home>
                    [-fixup [-fixupdir <fixup_dir>]]
cluvfy comp peer [-refnode <refnode>] -n <node_list>
                 [-r {10gR1|10gR2|11gR1|11gR2}]
                 [-orainv <orainventory_group>] [-osdba <osdba_group>]
                 [-verbose]
cluvfy comp software   [-n <node_list>]  [-d <oracle_home> [-r {10gR1|10gR2|11gR1|11gR2}]]  [-verbose]
cluvfy comp acfs  [-n <node_list>]  [-f <file_system>]  [-verbose]
cluvfy comp asm  [-n <node_list>]  [-verbose]
cluvfy comp gpnp [-n <node_list>]  [-verbose]
cluvfy comp gns [-n <node_list>]  [-verbose]
cluvfy comp scan  [-verbose]
cluvfy comp ohasd [-n <node_list>]  [-verbose]
cluvfy comp clocksync  [-noctss] [-n <node_list>] [-verbose]
cluvfy comp vdisk [-n <node_list>]  [-verbose]

 

racnode1-> cluvfy comp ocr -n all

 

Verifying OCR integrity

 

Checking OCR integrity...

 

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations

ASM Running check passed. ASM is running on all cluster nodes

Checking OCR config file "/etc/oracle/ocr.loc"...

OCR config file "/etc/oracle/ocr.loc" check successful

Disk group for ocr location "+DATA2" available on all the nodes

Disk group for ocr location "+OCR" available on all the nodes

Checking size of the OCR location "+DATA2" ...

Size check for OCR location "+DATA2" successful...
Size check for OCR location "+DATA2" successful...

Checking size of the OCR location "+OCR" ...

Size check for OCR location "+OCR" successful...
Size check for OCR location "+OCR" successful...

WARNING:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.

OCR integrity check passed

Verification of OCR integrity was successful.

racnode1-> ocrcheck

Status of Oracle Cluster Registry is as follows :
        Version                  :          3
        Total space (kbytes)     :     262120
        Used space (kbytes)      :       2808
        Available space (kbytes) :     259312
        ID                       : 1160667401
        Device/File Name         :       +OCR
                                    Device/File integrity check succeeded
        Device/File Name         :     +DATA2
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
        Cluster registry integrity check succeeded
        Logical corruption check bypassed due to non-privileged user
