Uninstalling Oracle 11g RAC (database and Grid Infrastructure)

This article describes in detail how to uninstall an Oracle RAC database instance and its software, covering stopping the instances, dropping the database, and removing the GI software, and also shows how to clean up leftover files, users, and groups.


Part 1: Dropping the database instance

1. Stop the instances on all nodes first

$ su - oracle

$ srvctl stop database -d wxqyh -o immediate
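
Before going further, it is worth confirming that every instance really is down. A quick optional check, assuming the same database name wxqyh used above:

$ srvctl status database -d wxqyh

Both instances should be reported as not running.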

2. Work on one of the nodes only; RAC mode has to be turned off first.

[oracle@rac1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Apr 24 17:33:43 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount

ORACLE instance started.

Total System Global Area  914440192 bytes

Fixed Size      2258600 bytes

Variable Size    360712536 bytes

Database Buffers   545259520 bytes

Redo Buffers      6209536 bytes

3. The database can only be dropped once cluster mode is turned off

SQL> alter system set CLUSTER_DATABASE=FALSE scope=spfile;

System altered.
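
Because the parameter was changed with scope=spfile, it only takes effect after the restart below. If you want to confirm the value actually landed in the spfile, an optional check is to query v$spparameter, which reflects the spfile contents:

SQL> select name, value from v$spparameter where name = 'cluster_database';

The VALUE column should show FALSE.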

SQL> shutdown abort

ORACLE instance shut down.

SQL> startup mount restrict

ORACLE instance started.

Total System Global Area  914440192 bytes

Fixed Size      2258600 bytes

Variable Size    297797976 bytes

Database Buffers   608174080 bytes

Redo Buffers      6209536 bytes

Database mounted.
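
Optionally, verify that the instance is mounted with restricted logins before issuing the drop. This is not part of the original procedure, just a sanity check against v$instance:

SQL> select instance_name, status, logins from v$instance;

STATUS should be MOUNTED and LOGINS should be RESTRICTED.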

4. Drop the database

SQL> drop database;

Database dropped.

5. Remove the database's registration from CRS

[oracle@rac1 ~]$ srvctl remove database -d wxqyh

Remove the database wxqyh? (y/[n]) y
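
To confirm the database is no longer registered with CRS, you can list the remaining registrations (an optional check; the clusterware stack is still up at this point):

[oracle@rac1 ~]$ srvctl config database

The database wxqyh should no longer appear in the output.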

Part 2: Deinstalling the Oracle database software

1. cd into the directory that holds the deinstall tool

[oracle@rac2 ~]$ cd $ORACLE_HOME/deinstall

2. Run the database-software deinstall on each node (or run ./deinstall without -local to remove it from all nodes in one pass)

[oracle@rac2 deinstall]$ ./deinstall -local

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################

## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/dbhome_1

Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database

Oracle Base selected for deinstall is: /u01/app/oracle

Checking for existence of central inventory location /u01/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid

The following nodes are part of this cluster: rac1,rac2

Checking for sufficient temp space availability on node(s) : 'rac1,rac2'

## [END] Install check configuration ##

Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2017-04-24_05-48-23-PM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2017-04-24_05-48-25-PM.log

Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2017-04-24_05-48-27-PM.log

Enterprise Manager Configuration Assistant END

Oracle Configuration Manager check START

OCM check log file location : /u01/app/oraInventory/logs//ocm_check9203.log

Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid

The cluster node(s) on which the Oracle home deinstallation will be performed are:rac1,rac2

Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac2', and the global configuration will be removed.

Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/dbhome_1

Inventory Location where the Oracle home registered is: /u01/app/oraInventory

The option -local will not modify any database configuration for this Oracle home.

No Enterprise Manager configuration to be updated for any database(s)

No Enterprise Manager ASM targets to update

No Enterprise Manager listener targets to migrate

Checking the config status for CCR

rac1 : Oracle Home exists with CCR directory, but CCR is not configured

rac2 : Oracle Home exists with CCR directory, but CCR is not configured

CCR check is finished

Do you want to continue (y - yes, n - no)? [n]: y

A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2017-04-24_05-48-17-PM.out'

Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2017-04-24_05-48-17-PM.err'

######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2017-04-24_05-48-27-PM.log

Updating Enterprise Manager ASM targets (if any)

Updating Enterprise Manager listener targets (if any)

Enterprise Manager Configuration Assistant END

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2017-04-24_05-52-38-PM.log

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2017-04-24_05-52-38-PM.log

De-configuring Local Net Service Names configuration file...

Local Net Service Names configuration file de-configured successfully.

De-configuring backup files...

Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START

OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean9203.log

Oracle Configuration Manager clean END

Setting the force flag to false

Setting the force flag to cleanup the Oracle Base

Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2017-04-24_05-47-46PM' on node 'rac2'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################

Cleaning the config for CCR

As CCR is not configured, so skipping the cleaning of CCR configuration

CCR clean is finished

Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node.

Successfully deleted directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node.

Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############
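
At this point the database home is detached from the central inventory on this node. If you want to double-check before moving on to GI, the inventory file can be inspected directly (the inventory path comes from the logs above; the grep pattern is just an example):

[oracle@rac2 ~]$ grep dbhome_1 /u01/app/oraInventory/ContentsXML/inventory.xml

No HOME entry for /u01/app/oracle/product/11.2.0/dbhome_1 should be returned.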

Part 3: Removing the GI software

1. On any node, switch to the grid user and cd into the directory that holds the deinstall tool

[root@rac2 Desktop]# su - grid

[grid@rac1 ~]$ cd $ORACLE_HOME/deinstall
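
Before running the tool, make sure the grid user's ORACLE_HOME really points at the GI home, otherwise the wrong deinstall may be picked up. A simple sanity check:

[grid@rac1 ~]$ echo $ORACLE_HOME

The output should be /u01/app/11.2.0/grid, matching the GI home shown in the logs below.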

2. Deinstall GI

2.1 Run ./deinstall. The interactive flow is roughly: Enter -> y -> Enter -> Enter -> Enter -> y -> y -> run the prompted commands as the root user on each node -> Enter

[grid@rac1 deinstall]$ ./deinstall

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /tmp/deinstall2017-04-24_05-58-35PM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################

## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/11.2.0/grid

Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster

Oracle Base selected for deinstall is: /u01/app/grid

Checking for existence of central inventory location /u01/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid

The following nodes are part of this cluster: rac1,rac2

Checking for sufficient temp space availability on node(s) : 'rac1,rac2'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2017-04-24_05-58-35PM/logs//crsdc.log

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2017-04-24_05-58-35PM/logs/netdc_check2017-04-24_05-59-08-PM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2017-04-24_05-58-35PM/logs/asmcadc_check2017-04-24_05-59-34-PM.log

Automatic Storage Management (ASM) instance is detected in this Oracle home /u01/app/11.2.0/grid.

ASM Diagnostic Destination : /u01/app/grid

ASM Diskgroups : +DATA,+FRA,+OCRVOTE

ASM diskstring : /dev/oracleasm/disks/*

Diskgroups will be dropped

De-configuring ASM will drop all the diskgroups and it's contents at cleanup time. This will affect all of the databases and ACFS that use this ASM instance(s).

If you want to retain the existing diskgroups or if any of the information detected is incorrect, you can modify by entering 'y'. Do you  want to modify above information (y|n) [n]: y

Specify the ASM Diagnostic Destination [/u01/app/grid]:

Specify the diskstring [/dev/oracleasm/disks/*]:

Specify the diskgroups that are managed by this ASM instance [+DATA,+FRA,+OCRVOTE]:

De-configuring ASM will drop the diskgroups at cleanup time. Do you want deconfig tool to drop the diskgroups y|n [y]: y

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid

The cluster node(s) on which the Oracle home deinstallation will be performed are:rac1,rac2

Oracle Home selected for deinstall is: /u01/app/11.2.0/grid

Inventory Location where the Oracle home registered is: /u01/app/oraInventory

Following RAC listener(s) will be de-configured: LISTENER

ASM instance will be de-configured from this Oracle home

Do you want to continue (y - yes, n - no)? [n]: y

A log of this session will be written to: '/tmp/deinstall2017-04-24_05-58-35PM/logs/deinstall_deconfig2017-04-24_05-59-01-PM.out'

Any error messages from this session will be written to: '/tmp/deinstall2017-04-24_05-58-35PM/logs/deinstall_deconfig2017-04-24_05-59-01-PM.err'

######################## CLEAN OPERATION START ########################

ASM de-configuration trace file location: /tmp/deinstall2017-04-24_05-58-35PM/logs/asmcadc_clean2017-04-24_06-00-29-PM.log

ASM Clean Configuration START

ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2017-04-24_05-58-35PM/logs/netdc_clean2017-04-24_06-01-01-PM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER

Stopping listener: LISTENER

Listener stopped successfully.

Unregistering listener: LISTENER

Listener unregistered successfully.

Listener de-configured successfully.

De-configuring Naming Methods configuration file on all nodes...

Naming Methods configuration file de-configured successfully.

De-configuring Local Net Service Names configuration file on all nodes...

Local Net Service Names configuration file de-configured successfully.

De-configuring Directory Usage configuration file on all nodes...

Directory Usage configuration file de-configured successfully.

De-configuring backup files on all nodes...

Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rac2".

/tmp/deinstall2017-04-24_05-58-35PM/perl/bin/perl -I/tmp/deinstall2017-04-24_05-58-35PM/perl/lib -I/tmp/deinstall2017-04-24_05-58-35PM/crs/install /tmp/deinstall2017-04-24_05-58-35PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2017-04-24_05-58-35PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Run the following command as the root user or the administrator on node "rac1".

/tmp/deinstall2017-04-24_05-58-35PM/perl/bin/perl -I/tmp/deinstall2017-04-24_05-58-35PM/perl/lib -I/tmp/deinstall2017-04-24_05-58-35PM/crs/install /tmp/deinstall2017-04-24_05-58-35PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2017-04-24_05-58-35PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode

Press Enter after you finish running the above commands

############# Run the prompted commands as the root user on each node, as instructed

---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Press Enter after you finish running the above commands

Setting the force flag to false

Setting the force flag to cleanup the Oracle Base

Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Delete directory '/u01/app/grid' on the local node : Done

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'rac2' : Done

Delete directory '/u01/app/11.2.0/grid' on the remote nodes 'rac2' : Done

Delete directory '/u01/app/oraInventory' on the remote nodes 'rac2' : Done

Delete directory '/u01/app/grid' on the remote nodes 'rac2' : Done

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2017-04-24_05-58-35PM' on node 'rac1'

Clean install operation removing temporary directory '/tmp/deinstall2017-04-24_05-58-35PM' on node 'rac2'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################

ASM instance was de-configured successfully from the Oracle home

Following RAC listener(s) were de-configured successfully: LISTENER

Oracle Clusterware is stopped and successfully de-configured on node "rac1"

Oracle Clusterware is stopped and successfully de-configured on node "rac2"

Oracle Clusterware is stopped and de-configured successfully.

Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.

Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.

Successfully deleted directory '/u01/app/oraInventory' on the local node.

Successfully deleted directory '/u01/app/grid' on the local node.

Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'rac2'.

Successfully deleted directory '/u01/app/11.2.0/grid' on the remote nodes 'rac2'.

Successfully deleted directory '/u01/app/oraInventory' on the remote nodes 'rac2'.

Successfully deleted directory '/u01/app/grid' on the remote nodes 'rac2'.

Oracle Universal Installer cleanup was successful.

Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac1,rac2' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac1,rac2' at the end of the session.

Run 'rm -rf /etc/oratab' as root on node(s) 'rac1' at the end of the session.

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

2.2 While the tool is waiting, open a separate window as the root user on node 2 and run the prompted command

Run the following command as the root user:

[root@rac1 trace]# /tmp/deinstall2017-04-24_05-58-35PM/perl/bin/perl -I/tmp/deinstall2017-04-24_05-58-35PM/perl/lib -I/tmp/deinstall2017-04-24_05-58-35PM/crs/install /tmp/deinstall2017-04-24_05-58-35PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2017-04-24_05-58-35PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode

Using configuration parameter file: /tmp/deinstall2017-04-24_05-58-35PM/response/deinstall_Ora11g_gridinfrahome1.rsp

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

Error occurred during initialization of VM

java.lang.Error: Properties init: Could not determine current working directory.

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

Error occurred during initialization of VM

java.lang.Error: Properties init: Could not determine current working directory.

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

Error occurred during initialization of VM

java.lang.Error: Properties init: Could not determine current working directory.

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

Error occurred during initialization of VM

java.lang.Error: Properties init: Could not determine current working directory.

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

Error occurred during initialization of VM

java.lang.Error: Properties init: Could not determine current working directory.

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'

CRS-2673: Attempting to stop 'ora.rac2.vip' on 'rac1'

CRS-2673: Attempting to stop 'ora.oc4j' on 'rac1'

CRS-2673: Attempting to stop 'ora.asm' on 'rac1'

CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'

CRS-2673: Attempting to stop 'ora.cvu' on 'rac1'

CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac1'

CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac1'

CRS-2677: Stop of 'ora.cvu' on 'rac1' succeeded

CRS-2677: Stop of 'ora.rac2.vip' on 'rac1' succeeded

CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded

CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded

CRS-2677: Stop of 'ora.scan1.vip' on 'rac1' succeeded

CRS-2677: Stop of 'ora.oc4j' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.ons' on 'rac1'

CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'

CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed

CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'

CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'

CRS-2673: Attempting to stop 'ora.asm' on 'rac1'

CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'

CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'

CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rac1'

CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'

CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded

CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

2017-04-24 18:07:35.612:

CLSD:An error was encountered while attempting to open log file "UNKNOWN". Additional diagnostics: (:CLSD00157:)

CRS-4611: Successful deletion of voting disk +OCRVOTE.

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

Error occurred during initialization of VM

java.lang.Error: Properties init: Could not determine current working directory.

de-configuration of ASM ... failed with error 1

see asmca logs at /u01/app/grid/cfgtoollogs/asmca for details

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

2017-04-24 18:07:58.388:

CLSD:An error was encountered while attempting to open log file "UNKNOWN". Additional diagnostics: (:CLSD00157:)

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'

CRS-2673: Attempting to stop 'ora.asm' on 'rac1'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'

CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'

CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.crf' on 'rac1'

CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'

CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'

CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed

CRS-4133: Oracle High Availability Services has been stopped.

Removing Trace File Analyzer

Successfully deconfigured Oracle clusterware stack on this node

2.3 On the node driving the deinstall, open another window as the root user

Run the following command as the root user:

[root@rac1 deinstall]# /tmp/deinstall2017-04-24_05-58-35PM/perl/bin/perl -I/tmp/deinstall2017-04-24_05-58-35PM/perl/lib -I/tmp/deinstall2017-04-24_05-58-35PM/crs/install /tmp/deinstall2017-04-24_05-58-35PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2017-04-24_05-58-35PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode

.......

Successfully deconfigured Oracle clusterware stack on this node
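
Once both nodes report a deconfigured clusterware stack, you can confirm that no clusterware daemons were left behind (a hedged check; the process names below are the usual 11gR2 ones):

[root@rac1 ~]# ps -ef | grep -E '[o]hasd|[c]rsd|[o]cssd|[e]vmd'

The command should return nothing on either node.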

Part 4: Removing leftover files, users, and groups (run on every node)

1. Run the following commands as the root user

[root@rac1 ~]# rm -rf /u01/app/*

[root@rac1 ~]# rm -rf /opt/ORCLfmap

[root@rac1 ~]# rm -rf /etc/oratab

[root@rac1 ~]# rm -rf /etc/init.d/init.ohasd

[root@rac1 ~]# rm -rf /etc/init.d/ohasd

[root@rac1 ~]# rm -rf /etc/ora*

[root@rac1 ~]# rm -rf /etc/inittab.crs

[root@rac1 ~]# rm -rf /usr/local/bin/dbhome

[root@rac1 ~]# rm -rf /usr/local/bin/oraenv

[root@rac1 ~]# rm -rf /usr/local/bin/coraenv

[root@rac1 ~]# rm -rf /tmp/CVU_*

[root@rac1 ~]# rm -rf /tmp/OraInsta*

[root@rac1 ~]# rm -rf /tmp/.oracle

[root@rac1 ~]# rm -rf /var/tmp/.oracle
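
Before deleting the OS accounts in the next step, make sure no processes owned by oracle or grid are still running, otherwise userdel may fail. A simple check:

[root@rac1 ~]# ps -fu oracle,grid

Only the header line should be printed if nothing is left.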

2. Delete the users and groups

[root@rac1 ~]# userdel grid

[root@rac1 ~]# userdel oracle

[root@rac1 ~]# groupdel oinstall

[root@rac1 ~]# groupdel asmadmin

[root@rac1 ~]# groupdel dba

[root@rac1 ~]# groupdel oper

[root@rac1 ~]# groupdel asmdba

[root@rac1 ~]# groupdel asmoper
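
Finally, a quick check that the accounts are really gone; both commands should report that no such user exists:

[root@rac1 ~]# id oracle

[root@rac1 ~]# id grid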
