RAC 11gR2 On Linux (1)


3. Prepare the shared storage for Oracle RAC

This section describes how to prepare the shared storage for Oracle RAC

Each node in a cluster requires external shared disks for storing the Oracle Clusterware (Oracle Cluster Registry and voting disk) files, and Oracle Database files. To ensure high availability of Oracle Clusterware files on Oracle ASM, you need to have at least 2 GB of disk space for Oracle Clusterware files in three separate failure groups, with at least three physical disks. Each disk must have at least 1 GB capacity to ensure that there is sufficient space to create Oracle Clusterware files. Use the following guidelines when identifying appropriate disk devices:

  • All of the devices in an Automatic Storage Management diskgroup should be the same size and have the same performance characteristics.
  • A diskgroup should not contain more than one partition on a single physical disk device.
  • Using logical volumes as a device in an Automatic Storage Management diskgroup is not supported with Oracle RAC.
  • The user account with which you perform the installation (typically, 'oracle') must have write permissions to create the files in the path that you specify.

3.1. Shared Storage

For this example installation, we will use ASM for Clusterware and database storage on top of SAN technology. The following table shows the storage layout for this implementation:

Block Device   ASMlib Name   Size   Comments
------------   -----------   ----   --------------------------------------
/dev/sda       OCR_VOTE01    1 GB   ASM Diskgroup for OCR and Voting Disks
/dev/sdb       OCR_VOTE02    1 GB   ASM Diskgroup for OCR and Voting Disks
/dev/sdc       OCR_VOTE03    1 GB   ASM Diskgroup for OCR and Voting Disks
/dev/sdd       ASM_DATA01    2 GB   ASM Data Diskgroup
/dev/sde       ASM_DATA02    2 GB   ASM Data Diskgroup
/dev/sdf       ASM_DATA03    2 GB   ASM Data Diskgroup
/dev/sdg       ASM_DATA04    2 GB   ASM Data Diskgroup
/dev/sdh       ASM_DATA05    2 GB   ASM Flash Recovery Area Diskgroup
/dev/sdi       ASM_DATA06    2 GB   ASM Flash Recovery Area Diskgroup
/dev/sdj       ASM_DATA07    2 GB   ASM Flash Recovery Area Diskgroup
/dev/sdk       ASM_DATA08    2 GB   ASM Flash Recovery Area Diskgroup
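Tip: before partitioning, it can help to confirm that every node sees the same LUNs at the expected sizes. A minimal check, assuming the device names from the table above (adjust the list for your environment):

# fdisk -l /dev/sd[a-k] 2>/dev/null | grep '^Disk /dev/'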

3.1.1. Partition the Shared Disks

1. Once the LUNs have been presented from the SAN to ALL servers in the cluster, partition the LUNs from one node only: run fdisk to create a single whole-disk partition with exactly 1 MB offset on each LUN to be used as an ASM disk.

Tip: From the fdisk prompt, type "u" to switch the display unit from cylinders to sectors. Then create a single primary partition starting on sector 2048 (a 1 MB offset, assuming 512-byte sectors). See the example below for /dev/sda:

# fdisk /dev/sda

Command (m for help): u
Changing display/entry units to sectors

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (61-1048575, default 61): 2048
Last sector or +size or +sizeM or +sizeK (2048-1048575, default 1048575):
Using default value 1048575

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
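Tip: if many LUNs need the same single-partition layout, the interactive fdisk session can be scripted. A minimal sketch using parted (device list assumed from the storage table above; requires a parted release that understands MiB units, otherwise stay with the interactive fdisk session shown here):

for DEV in /dev/sd{a..k}; do
    # one whole-disk primary partition starting at a 1 MB offset
    parted -s "$DEV" mklabel msdos mkpart primary 1MiB 100%
done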

2. Load the updated block device partition tables by running the following on ALL servers participating in the cluster:

# /sbin/partprobe
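Tip: a quick way to confirm that the new partitions are now visible on each node (device names assumed from the table above):

# grep 'sd[a-k]1' /proc/partitions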

3.1.2. Installing and Configuring ASMLib

ASMLib is highly recommended for systems that will use ASM for shared storage within the cluster, due to the performance and manageability benefits it provides. Perform the following steps to install and configure ASMLib on the cluster nodes:

NOTE: ASMLib automatically provides LUN persistence, so when using ASMLib there is no need to manually configure LUN persistence for the ASM devices on the system.

1. Download the following packages from the ASMLib OTN page. If you are an Enterprise Linux customer, you can obtain the software through the Unbreakable Linux Network.

NOTE: The ASMLib kernel driver MUST match the kernel revision number; the kernel revision number of your system can be identified by running the "uname -r" command. Also, be sure to download the set of RPMs that pertain to your platform architecture; in our case this is x86_64.

oracleasm-support-2.1.3-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm
oracleasm-2.6.18-92.1.17.0.2.el5-2.0.5-1.el5.x86_64.rpm

2. Install the RPMs by running the following as the root user:

# rpm -ivh oracleasm-support-2.1.3-1.el5.x86_64.rpm \
      oracleasmlib-2.0.4-1.el5.x86_64.rpm \
      oracleasm-2.6.18-92.1.17.0.2.el5-2.0.5-1.el5.x86_64.rpm
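Tip: to confirm that all three packages installed cleanly on a node, list them afterwards:

# rpm -qa | grep oracleasm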


3. Configure ASMLib by running the following as the root user:

NOTE: If using user and group separation for the installation (as documented here), the ASMLib driver interface owner is 'grid' and the group that owns the driver interface is 'asmadmin'. These were created in section 2.1. If a simpler installation using only the oracle user is performed, the owner will be 'oracle' and the group owner will be 'dba'.

# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

4. Repeat steps 2 and 3 on ALL cluster nodes (see the tip below).
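Tip: on each node you can confirm that the ASMLib driver is loaded and its filesystem is mounted before moving on (exact output wording varies by version):

# /etc/init.d/oracleasm status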

3.1.3. Using ASMLib to Mark the Shared Disks as Candidate Disks

To create ASM disks using ASMLib:

1. As the root user, use oracleasm to create ASM disks using the following syntax:

# /usr/sbin/oracleasm createdisk disk_name device_partition_name

In this command, disk_name is the name you choose for the ASM disk. The name you choose must contain only ASCII capital letters, numbers, or underscores, and the disk name must start with a letter, for example, DISK1, VOL1, or RAC_FILE1. The name of the disk partition to mark as an ASM disk is the device_partition_name. For example:

# /usr/sbin/oracleasm createdisk OCR_VOTE01 /dev/sda1
# /usr/sbin/oracleasm createdisk OCR_VOTE02 /dev/sdb1
# /usr/sbin/oracleasm createdisk OCR_VOTE03 /dev/sdc1
# /usr/sbin/oracleasm createdisk ASMDATA01 /dev/sdd1
# /usr/sbin/oracleasm createdisk ASMDATA02 /dev/sde1
# /usr/sbin/oracleasm createdisk ASMDATA03 /dev/sdf1
# /usr/sbin/oracleasm createdisk ASMDATA04 /dev/sdg1
# /usr/sbin/oracleasm createdisk ASMDATA05 /dev/sdh1
# /usr/sbin/oracleasm createdisk ASMDATA06 /dev/sdi1
# /usr/sbin/oracleasm createdisk ASMDATA07 /dev/sdj1
# /usr/sbin/oracleasm createdisk ASMDATA08 /dev/sdk1

If you need to unmark a disk that was used in a createdisk command, you can use the following syntax as the root user:

# /usr/sbin/oracleasm deletedisk disk_name

2. Repeat step 1 for each disk that will be used by Oracle ASM.

3. After you have created all the ASM disks for your cluster, use the listdisks command to verify their availability:

# /usr/sbin/oracleasm listdisks
OCR_VOTE01
OCR_VOTE02
OCR_VOTE03
ASMDATA01
ASMDATA02
ASMDATA03
ASMDATA04
ASMDATA05
ASMDATA06
ASMDATA07
ASMDATA08

4. On all the other nodes in the cluster, use the scandisks command as the root user to pick up the newly created ASM disks. You do not need to create the ASM disks on each node, only on one node in the cluster.

# /usr/sbin/oracleasm scandisks
Scanning system for ASM disks [ OK ]

5. After scanning for ASM disks, display the available ASM disks on each node to verify their availability:

# /usr/sbin/oracleasm listdisks
OCR_VOTE01
OCR_VOTE02
OCR_VOTE03
ASMDATA01
ASMDATA02
ASMDATA03
ASMDATA04
ASMDATA05
ASMDATA06
ASMDATA07
ASMDATA08


4. Oracle Grid Infrastructure Install

4.1. Basic Grid Infrastructure Install (without GNS and IPMI)

As the grid user (the Grid Infrastructure software owner), start the installer by running "runInstaller" from the staged installation media.

NOTE: Be sure the installer is run as the intended software owner; the only supported method to change the software owner is to reinstall.

# xhost +
# su - grid

Change into the directory where you staged the Grid Infrastructure software, then run:

./runInstaller


NOTE: The software updates feature allows the installer to download mandatory patches for itself as well as for the base product at installation time, so that they do not need to be applied later. Currently, when there is a bug in the base installation, you have to wait until the next release before it can be fixed; this feature helps resolve such installation issues in the middle of a release without either recutting the media or deferring the bug fix to a later release. It also applies mandatory patches to the base product, thereby creating more certified installations out of the box.

Action:

For this guide we skip the software updates.

Action:

Select radio button 'Install and Configure Grid Infrastructure for a Cluster' and click ' Next> '

Action:

Select radio button 'Advanced Installation' and click ' Next> '

Action:

Accept 'English' as the language and click ' Next> '

Action:

Specify your cluster name and the SCAN name you want to use and click ' Next> '

Note:

Make sure 'Configure GNS' is NOT selected.

Action:

Use the Edit and Add buttons to specify the node names and virtual IP addresses you configured previously in your /etc/hosts file. Use the 'SSH Connectivity' button to configure/test the passwordless SSH connectivity between your nodes.

Action:

Type in the OS password for the user 'grid' and press 'Setup'

When finished, click ' OK '

Action:

Click on 'Interface Type' next to the Interfaces you want to use for your cluster and select the correct values for 'Public', 'Private' and 'Do Not Use' . When finished click ' Next> '

Action:

Select radio button 'Automatic Storage Management (ASM)' and click ' Next> '

Action:

Select the 'DiskGroup Name', specify the 'Redundancy' and tick the disks you want to use; when done click ' Next> '

NOTE: The number of voting disks that will be created depends on the redundancy level you specify: EXTERNAL will create 1 voting disk, NORMAL will create 3 voting disks, HIGH will create 5 voting disks.
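Tip: once the Grid Infrastructure stack is up (after root.sh has been run later in this section), the resulting voting disk layout can be verified from the Grid Infrastructure home used in this guide, for example:

# /u01/11.2.0/grid/bin/crsctl query css votedisk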

NOTE: If you see an empty screen for your candidate disks, it is likely that ASMLib has not been properly configured. If you are sure that ASMLib has been properly configured, click on 'Change Discovery Path' and provide the correct path. See the example below:
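For example, when the disks are managed by ASMLib, a discovery path of '/dev/oracleasm/disks/*' (or the default 'ORCL:*' string) is commonly used; the exact value is an assumption for illustration and depends on how your disks are presented.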

Action:

Specify and confirm the password you want to use and click ' Next> '

Action:

Select NOT to use IPMI and click ' Next> '

Action:

Assign the correct OS groups for OS authentication and click ' Next> '

Action:

Specify the locations for your ORACLE_BASE and for the Software location and click ' Next> '

Action:

Specify the locations for your Inventory directory and click ' Next> '

Note:

OUI performs certain checks and comes back with the screen below

Action:

Check that the status of all checks is 'Succeeded' and click ' Next> '

Note:

If you have failed checks marked as 'Fixable' click 'Fix & Check again'. This will bring up the window below:

Action:

Execute the runfixup.sh script as the root user, as described on the screen.

Action:

Install packages that might be missing and correct all other failed checks. If you are sure that the proper configuration is in place for a successful installation, the unsuccessful checks can be ignored. Tick the box 'Ignore All' before you click ' Next> '

Action:

Click ' Install'

Action:

Wait for the OUI to complete its tasks

Action:

At this point you may need to run oraInstRoot.sh on all cluster nodes (if this is the first installation of an Oracle product on this system).

NOTE: DO NOT run root.sh at this time; we must first install the 11.2.0.2.2 GI PSU (Patch 12311357).

Action:

To apply the 11.2.0.2.2 GI PSU prior to running root.sh, the following steps must be performed on EVERY node in the cluster independently. These steps are specific to applying the 11.2.0.2.2 GI PSU prior to running root.sh; this procedure is NOT documented in the 11.2.0.2.2 PSU README. If you have already run root.sh (or rootupgrade.sh) and completed the installation, the PSU must be installed per the instructions provided in the README.

1. Install the latest version of OPatch (available under Patch 6880880) into the GI Home:

# unzip -d <11.2.0.2GI_HOME> p6880880_112000_Linux-x86-64.zip

2. Create an EMPTY directory to stage the GI PSU as the GI software owner (our example uses a directory named gipsu):

# mkdir /u01/stage/gipsu

3. Extract the GI PSU into the empty stage directory as the GI software owner:

# unzip -d /u01/stage/gipsu p12311357_112020_Linux-x86-64.zip

4. Apply the GI PSU portion of the patch to the newly installed 11.2.0.2 GI Home as the GI software owner using OPatch napply:

# <11.2.0.2GI_HOME>/OPatch/opatch napply -oh <11.2.0.2GI_HOME> -local /u01/stage/gipsu/12311357
# <11.2.0.2GI_HOME>/OPatch/opatch napply -oh <11.2.0.2GI_HOME> -local /u01/stage/gipsu/11724916

5. Repeat the above steps 1 - 4 on all cluster nodes (see the verification tip below).
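Tip: after the PSU has been applied, the patch inventory of the GI Home can be checked on each node (same path placeholder as above):

# <11.2.0.2GI_HOME>/OPatch/opatch lsinventory -oh <11.2.0.2GI_HOME>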

Action:

Once the 11.2.0.2.2 GI PSU has been applied to the newly installed GI Home, you can now execute root.sh one node at a time (allowing the current node to complete prior to moving on to the next) as instructed in the OUI popup window.

Action:

Wait for the OUI to finish the cluster configuration.

Action:

You should see the confirmation that installation of the Grid Infrastructure was successful. Click 'Close' to finish the install.

5. Grid Infrastructure Home Patching

Assuming this RAC guide was followed, the 11.2.0.2.2 Grid Infrastructure PSU (GI PSU #2) was installed during the Grid Infrastructure 11.2.0.2 install process. For installation of future PSUs (on a configured Grid Infrastructure installation) you must follow the installation instructions contained within that respective PSU README. Information on the latest available PSUs, as well as other recommended patches, can be found in My Oracle Support Note 756671.1.

6. RDBMS Software Install

As the oracle user (the RDBMS software owner), start the installer by running "runInstaller" from the staged installation media.

NOTE: Be sure the installer is run as the intended software owner; the only supported method to change the software owner is to reinstall.

# su - oracle

Change into the directory where you staged the RDBMS software, then run:

./runInstaller


Action:

Provide your e-mail address, tick the check box and provide your Oracle Support Password if you want to receive Security Updates from Oracle Support and click ' Next> '

NOTE: The software updates feature allows the installer to download mandatory patches for itself as well as for the base product at installation time, so that they do not need to be applied later. Currently, when there is a bug in the base installation, you have to wait until the next release before it can be fixed; this feature helps resolve such installation issues in the middle of a release without either recutting the media or deferring the bug fix to a later release. It also applies mandatory patches to the base product, thereby creating more certified installations out of the box.

Action:

For this guide we skip the software updates.

Action:

Select the option 'Install Database software only' and click ' Next> '

Action:

Select 'Real Application Clusters database installation', and select all nodes. Use the 'SSH Connectivity' button to configure/test the passwordless SSH connectivity between your nodes.

Action:

Type in the OS password for the oracle user and click 'Setup'

Action:

Click 'OK' and then ' Next> '

Action:

To confirm English as the selected language, click ' Next> '

Action:

Make sure radio button 'Enterprise Edition' is ticked, click ' Next> '

Action:

Specify the path to your Oracle Base and, below it, the location where you want to store the software (the Oracle home). Click ' Next> '

Action:

Use the drop down menu to select the names of the Database Administrators and Database Operators group and click ' Next> '

Note:

Oracle Universal Installer performs prerequisite checks.

Action:

Check that the status of all checks is 'Succeeded' and click ' Next> '

Note:

If you are sure the unsuccessful checks can be ignored, tick the box 'Ignore All' before you click ' Next> '

Action:

Perform a final check that the information on the screen is correct before you click ' Finish '

Action:

Log in to a terminal window as the root user and run the root.sh script on the first node. When it has finished, do the same on all the other nodes in your cluster, then click 'OK'

NOTE: root.sh should be run on one node at a time.

Action:

Click ' Close ' to finish the installation of the RDBMS Software.

7. RAC Home Patching

Once the Database software is installed, you will need to apply the 11.2.0.2.2 GI PSU (includes the Database PSU) to the 11.2.0.2 Database Home following the instructions in the GI PSU README. Specifically you will follow section 2 - "Patch Installation and Deinstallation" - Case 2: Patching Oracle RAC Database Homes.
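The README typically drives this through 'opatch auto' run as the root user. A minimal sketch, with placeholder home paths and assuming the PSU was staged under /u01/stage/gipsu as in the previous section (defer to the README for the authoritative syntax):

# <11.2.0.2GI_HOME>/OPatch/opatch auto /u01/stage/gipsu -oh <11.2.0.2 RAC_HOME>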

8. Run ASMCA to create diskgroups

As the grid user, start the ASM Configuration Assistant (ASMCA):

# su - grid
cd /u01/11.2.0/grid/bin
./asmca


Action:

Click 'Create' to create a new diskgroup

Action:

Type in a name for the diskgroup, select the redundancy you want to provide and mark the tick box for the disks you want to assign to the new diskgroup.

Action: Click 'OK'

Action:

Click 'Create' to create the diskgroup for the flash recovery area

Action:

Type in a name for the diskgroup, select the redundancy you want to provide and mark the tick box for the disks you want to assign to the new diskgroup.

Action: Click 'OK'

Action: Click 'Exit'

Action: Click 'Yes'

Note:

It is Oracle's best practice to have an OCR mirror stored in a second diskgroup. To follow this recommendation, add an OCR mirror. Note that you can only have one OCR per diskgroup.

Action:

To add an OCR mirror to an Oracle ASM diskgroup, ensure that the Oracle Clusterware stack is running, then run the following commands as root from the Grid Infrastructure home /bin directory:

# ocrconfig -add +ORADATA
# ocrcheck
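Tip: the diskgroups created by ASMCA can also be verified from the command line as the grid user (with the grid user's ASM environment set), for example:

$ asmcmd lsdg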