Node 1:
eth0: 192.168.4.111
eth1: 192.168.1.111
eth2: 192.168.1.113
eth3: 192.168.2.111
hostname: rhel1
/etc/sysconfig/network
Node 2:
eth0: 192.168.4.112
eth1: 192.168.1.112
eth2: 192.168.1.114
eth3: 192.168.2.112
hostname: rhel2
/etc/sysconfig/network
Openfiler:
eth0: 192.168.4.1
eth1: 192.168.2.11
DNS server:
eth0:192.168.4.102
[root@rhel1 ~]# cat /etc/yum.repos.d/CentOS-Base.repo
[CentOS]
name=CentOS-$releasever - Contrib
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib
baseurl=file:///mnt
gpgcheck=0
enabled=1
#gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
Mount the iSCSI disks
Node 1 and Node 2 (perform on both nodes):
cd /etc/init.d
[root@rhel1 init.d]# ./iscsi start
iscsid is stopped
Turning off network shutdown. Starting iSCSI daemon: [ OK ]
Setting up iSCSI targets: iscsiadm: No records found![ OK ]
[root@rhel1 init.d]# chkconfig --list | grep scsi
iscsi 0:off 1:off 2:off 3:on 4:on 5:on 6:off
iscsid 0:off 1:off 2:off 3:on 4:on 5:on 6:off
Discover the iSCSI targets exported by the storage server 192.168.2.11:
[root@rhel1 init.d]# iscsiadm -m discovery -t sendtargets -p 192.168.2.11:3260
192.168.2.11:3260,1 iqn.2010-10.example.com:openfiler.ractest.ocrvdisk1
192.168.2.11:3260,1 iqn.2010-10.example.com:openfiler.ractest.ocrvdisk3
192.168.2.11:3260,1 iqn.2010-10.example.com:openfiler.ractest.datafile1
192.168.2.11:3260,1 iqn.2010-10.example.com:openfiler.ractest.fra1
192.168.2.11:3260,1 iqn.2010-10.example.com:openfiler.ractest.ocrvdisk2
Log in to the targets to attach the disks locally:
iscsiadm -m node -T iqn.2013-08.example.com.openfiler.ractest.ocrvdisk1 -p 192.168.2.11:3260 -l
iscsiadm -m node -T iqn.2013-08.example.com.openfiler.ractest.ocrvdisk2 -p 192.168.2.11:3260 -l
iscsiadm -m node -T iqn.2013-08.example.com.openfiler.ractest.ocrvdisk3 -p 192.168.2.11:3260 -l
iscsiadm -m node -T iqn.2013-08.example.com.openfiler.ractest.dbfile1 -p 192.168.2.11:3260 -l
iscsiadm -m node -T iqn.2013-08.example.com.openfiler.ractest.fra1 -p 192.168.2.11:3260 -l
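The five login commands above differ only in the target name, so a small loop can generate (or run) them. A minimal dry-run sketch — it only prints the commands; drop the `echo` to log in for real. The IQNs and portal are the ones used in this environment:

```shell
#!/bin/sh
# Dry-run: print the iscsiadm login command for each discovered target.
# Drop the `echo` to actually log in.
gen_logins() {
    portal=192.168.2.11:3260
    for tgt in ocrvdisk1 ocrvdisk2 ocrvdisk3 dbfile1 fra1; do
        echo "iscsiadm -m node -T iqn.2013-08.example.com.openfiler.ractest.${tgt} -p ${portal} -l"
    done
}
gen_logins
```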
Check the local disk information:
[root@rhel1 init.d]# fdisk -l
Configure udev to fix the iSCSI disk device names
Create the rules file 55-openiscsi.rules under /etc/udev/rules.d with the following content:
#/etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b",SYMLINK+="iscsi/%c/part%n"
The iscsidev.sh script referenced by PROGRAM is shown below.
Create iscsidev.sh under /etc/udev/scripts with the following content, and make the script executable:
[root@rhel2 rules.d]# cat /etc/udev/scripts/iscsidev.sh
#!/bin/sh
# FILE: /etc/udev/scripts/iscsidev.sh
BUS=${1}
HOST=${BUS%%:*}
[ -e /sys/class/iscsi_host ] || exit 1
file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})
# This is not an open-iscsi drive
if [ -z "${target_name}" ]; then
exit 1
fi
echo "${target_name##*.}"
chmod +x /etc/udev/scripts/iscsidev.sh
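The symlink name comes from the script's final line: `${target_name##*.}` keeps only the last dot-separated field of the IQN. The expansion can be checked in isolation (the sample IQN is one of those returned by the discovery above):

```shell
#!/bin/sh
# What the udev helper's final echo produces: the last dot-separated
# field of the target name becomes the /dev/iscsi/<name> directory.
target_name="iqn.2010-10.example.com:openfiler.ractest.ocrvdisk1"
short=${target_name##*.}   # strip everything up to the last '.'
echo "$short"
```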
Configure the yum repository
[root@rhel1 yum.repos.d]# cat CentOS-Base.repo
[base]
name=CentOS-$releasever - Base
enabled=1
baseurl=file:///mnt/
gpgcheck=0
Verify that the required packages are installed (install any missing ones with yum):
rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \
compat-db \
control-center \
gcc \
gcc-c++ \
glibc \
glibc-common \
glibc-devel \
libstdc++ \
libstdc++-devel \
make \
sysstat \
libaio \
compat-libstdc++-33 \
glibc-headers \
kernel-headers \
libXp \
openmotif22 \
compat-libf2c \
compat-libgcc \
libgomp \
libXmu \
elfutils-libelf \
elfutils-libelf-devel \
elfutils-libelf-devel-static \
libaio-devel \
unixODBC \
unixODBC-devel \
libgcc
A problem encountered with the yum configuration
http://mirrors.163.com/centos/5/os/i386/repodata/filelists.sqlite.bz2: [Errno -1] Metadata file does not match checksum
Trying other mirror.
Error: failure: repodata/filelists.sqlite.bz2 from base: [Errno 256] No more mirrors to try.
You could try running: package-cleanup --problems
package-cleanup --dupes
rpm -Va --nofiles --nodigest
The program package-cleanup is found in the yum-utils package.
I hit this problem and spent a long time on the suggested commands without success; a similar issue on the official site finally pointed to the fix below:
yum clean all
yum makecache
With the query command above, compat-libf2c and compat-libgcc report "is not installed" even when they installed successfully (compat-libf2c needs both the 32-bit and 64-bit packages). Use the following command to confirm whether the two packages are present; if they show up, the message can be ignored.
rpm -qa | grep compat-lib
Adjust system parameters
(1) Kernel parameters
On Red Hat Enterprise Linux 5.4 the shmmax and shmall parameters are already set by default, and the values are large enough, so they do not need to be changed.
(2) Network parameters
Edit /etc/sysctl.conf and append the following:
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=1048576
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
fs.aio-max-nr=1048576
Apply the changes by running, as root:
sysctl -p
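kernel.sem above packs four kernel limits into one line (semmsl, semmns, semopm and semmni, in that order). A quick sketch that splits the line back into named fields, useful as a sanity check before running sysctl -p:

```shell
#!/bin/sh
# Split the kernel.sem line from sysctl.conf into its four named fields
# (semmsl, semmns, semopm, semmni -- the order the kernel expects).
sem_line="kernel.sem = 250 32000 100 128"
set -- $(echo "$sem_line" | cut -d= -f2)
echo "semmsl=$1 semmns=$2 semopm=$3 semmni=$4"
```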
(3) Resource limits
Edit /etc/security/limits.conf and append the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
(4) Login session limits
Edit /etc/pam.d/login and append the following:
session required /lib64/security/pam_limits.so
(5) /etc/profile
Edit /etc/profile and append the following:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
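After the oracle or grid user logs back in, the limits raised by this block can be read back with ulimit. A small check script (run it as oracle or grid after re-login; the expected values are the 16384 and 65536 set above):

```shell
#!/bin/sh
# Print the limits the /etc/profile block above is meant to raise.
# As oracle or grid after re-login, expect 16384 and 65536.
nproc_limit=$(ulimit -u)
nofile_limit=$(ulimit -n)
echo "max user processes: ${nproc_limit}"
echo "open files:         ${nofile_limit}"
```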
Configure the DNS server:
Make sure the bind packages are installed on the DNS server; if not, install them with yum install bind. The versions used here:
bind-9.3.3-7.el5
bind-utils-9.3.3-7.el5
bind-libs-9.3.3-7.el5
bind-chroot-9.3.3-7.el5
caching-nameserver-9.3.3-7.el5
Create the zones
Add the following zone configuration to /var/named/chroot/etc/named.conf:
options {
directory "/var/named";
};
zone "4.168.192.in-addr.arpa." IN { # reverse zone for 192.168.4.0/24
type master;
file "192.168.4.db";
};
zone "example.com." IN { # forward zone for example.com
type master;
file "example.com.db";
};
Create the file example.com.db under /var/named/chroot/var/named/ with the following content:
$TTL 1H
@ IN SOA homeserver.localdomain. root.homeserver.localdomain. ( 5
3H
1H
1W
1H )
@ IN NS homeserver.localdomain.
rhel-cluster-scan.grid 1H IN A 192.168.4.149
                       1H IN A 192.168.4.150
                       1H IN A 192.168.4.151
homeserver is the DNS server's hostname and localdomain is its domain name.
Create the file 192.168.4.db under /var/named/chroot/var/named/ with the following content:
$TTL 1H
@ IN SOA homeserver.localdomain.grid.example.com. root.homeserver.localdomain.grid.example.com. ( 2
3H
1H
1W
1H )
NS homeserver.localdomain.grid.example.com.
149 PTR rhel-cluster-scan.grid.example.com.
150 PTR rhel-cluster-scan.grid.example.com.
151 PTR rhel-cluster-scan.grid.example.com.
Restart the DNS service:
[root@rhel1 Server]# service named restart
Stopping named: [ OK ]
Starting named:
Enable named at boot:
chkconfig --level 35 named on
Modify the DNS client configuration on Node 1 and Node 2:
vim /etc/resolv.conf
search grid.example.com example.com
nameserver 192.168.4.102
Restart the network service (service network restart) for the change to take effect.
Edit /etc/nsswitch.conf, find the line starting with hosts, and append nis to the end of it, so that it reads:
hosts: files dns nis
Verify the DNS configuration:
nslookup rhel-cluster-scan.grid.example.com
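A healthy lookup returns all three SCAN addresses. The sketch below pulls the A-record addresses out of nslookup output; the embedded sample is an assumption of what a successful answer looks like, built from the addresses defined in example.com.db:

```shell
#!/bin/sh
# Extract the A-record addresses from captured nslookup output.
# Sample output embedded for illustration; in practice pipe
# `nslookup rhel-cluster-scan.grid.example.com` in instead.
sample="Server: 192.168.4.102
Address: 192.168.4.102#53

Name: rhel-cluster-scan.grid.example.com
Address: 192.168.4.149
Name: rhel-cluster-scan.grid.example.com
Address: 192.168.4.150
Name: rhel-cluster-scan.grid.example.com
Address: 192.168.4.151"
# Keep Address: lines, but skip the server line (it carries the #53 port).
scan_ips=$(echo "$sample" | awk '$1 == "Address:" && $2 !~ /#/ { print $2 }')
echo "$scan_ips"
```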
Node server configuration (configure on both nodes)
[root@rhel1 ~]# vim /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
# Public Network - (eth0)
192.168.4.111 rhel1 rhel1.localdomain
192.168.4.112 rhel2 rhel2.localdomain
# Private Interconnect - (eth1)
192.168.1.111 rhel1-priv
192.168.1.112 rhel2-priv
# Private Interconnect - (eth2)
192.168.1.113 rhel1-priv2
192.168.1.114 rhel2-priv2
# Public Virtual IP (VIP) addresses - (eth0:1)
192.168.4.113 rhel1-vip
192.168.4.114 rhel2-vip
# Private Storage Network for Openfiler - (eth2)
192.168.2.11 openfiler1
Create the users and groups
groupadd -g 1000 oinstall
groupadd -g 1100 asmadmin
groupadd -g 1200 dba
groupadd -g 1201 oper
groupadd -g 1300 asmdba
groupadd -g 1301 asmoper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 1101 -g oinstall -G dba,oper,asmdba oracle
[root@rhel1 ~]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@rhel1 ~]# passwd grid
Changing password for user grid.
New UNIX password:
Retype new UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
Create the directories
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01
mkdir /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
Set environment variables
1. root user
Edit the .bash_profile file in root's $HOME on every node and append the following:
alias sl='vi /var/log/messages'
alias rpmb='rpm -qa --queryformat "%{name}-%{version}-%{release}-%{arch}\n"'
Run source .bash_profile to make it take effect.
2. grid user
Edit the .bash_profile file in grid's $HOME on every node and append the following:
alias ls="ls -FA"
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME
ORACLE_PATH=/u01/app/oracle/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
PATH=$PATH:$ORACLE_HOME/oracm/bin:$ORACLE_HOME/OPatch
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/ctx/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
export TMP=/tmp
export TMPDIR=/tmp
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
export SQLPATH=~/admin/sql:$ORACLE_HOME/sqlplus/admin
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
umask 022
Run source .bash_profile to make it take effect.
3. oracle user
Edit the .bash_profile file in oracle's $HOME on every node and append the following:
alias ls="ls -FA"
ORACLE_SID=ractest1; export ORACLE_SID
ORACLE_UNQNAME=ractest; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_PATH=/u01/app/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
PATH=$PATH:$ORACLE_HOME/oracm/bin:$ORACLE_HOME/OPatch
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/ctx/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
export TMP=/tmp
export TMPDIR=/tmp
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
alias sql="sqlplus / as sysdba"
alias al='vi $ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log'
alias alt='tail -f $ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log'
export SQLPATH=~/admin/sql:$ORACLE_HOME/sqlplus/admin
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
umask 022
Run source .bash_profile to make it take effect.
Configure SSH user equivalence (set up equivalence separately for the oracle and grid users)
Configure equivalence for the oracle user
Step 1: Generate the key pairs on the rhel1 node.
Run the following commands on rhel1:
[oracle@rhel1 ~]$ mkdir ~/.ssh
[oracle@rhel1 ~]$ chmod 755 ~/.ssh
[oracle@rhel1 ~]$ /usr/bin/ssh-keygen -t dsa    (press Enter at every prompt)
[oracle@rhel1 ~]$ /usr/bin/ssh-keygen -t rsa    (press Enter at every prompt)
Step 2: Generate the key pairs on the rhel2 node.
Run the following commands on rhel2:
[oracle@rhel2 ~]$ mkdir ~/.ssh
[oracle@rhel2 ~]$ chmod 755 ~/.ssh
[oracle@rhel2 ~]$ /usr/bin/ssh-keygen -t dsa
[oracle@rhel2 ~]$ /usr/bin/ssh-keygen -t rsa
Step 3: Append both nodes' public keys to the authorized_keys file on rhel1.
Run the following commands on rhel1:
[oracle@rhel1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@rhel1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@rhel1 .ssh]$ scp ~/.ssh/authorized_keys root@rhel2:/home/oracle/.ssh/    (the copy fails as the oracle user, so root is used)
Run the following commands on rhel2:
[root@rhel2 .ssh]# chown oracle:oinstall authorized_keys
[oracle@rhel2 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@rhel2 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@rhel2 ~]$ scp ~/.ssh/authorized_keys oracle@rhel1:/home/oracle/.ssh/
Step 4: Fix the permissions of authorized_keys on both nodes.
[oracle@rhel1 ~]$ chmod 644 ~/.ssh/authorized_keys
[oracle@rhel2 ~]$ chmod 644 ~/.ssh/authorized_keys
Step 5: Verify user equivalence.
Run the following commands on both nodes:
[oracle@rhel1 ~]$ ssh rhel1 date
[oracle@rhel1 ~]$ ssh rhel2 date
[oracle@rhel2 ~]$ ssh rhel1 date
[oracle@rhel2 ~]$ ssh rhel2 date
Configure equivalence for the grid user
Step 1: Generate the key pairs on the rhel1 node.
Run the following commands on rhel1:
[grid@rhel1 ~]$ mkdir ~/.ssh
[grid@rhel1 ~]$ chmod 755 ~/.ssh
[grid@rhel1 ~]$ /usr/bin/ssh-keygen -t dsa    (press Enter at every prompt)
[grid@rhel1 ~]$ /usr/bin/ssh-keygen -t rsa    (press Enter at every prompt)
Step 2: Generate the key pairs on the rhel2 node.
Run the following commands on rhel2:
[grid@rhel2 ~]$ mkdir ~/.ssh
[grid@rhel2 ~]$ chmod 755 ~/.ssh
[grid@rhel2 ~]$ /usr/bin/ssh-keygen -t dsa
[grid@rhel2 ~]$ /usr/bin/ssh-keygen -t rsa
Step 3: Append both nodes' public keys to the authorized_keys file on rhel1.
Run the following commands on rhel1:
[grid@rhel1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[grid@rhel1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[grid@rhel1 .ssh]$ scp authorized_keys root@rhel2:/home/grid/.ssh/    (the copy fails as the grid user, so root is used)
Run the following commands on rhel2:
[root@rhel2 .ssh]# chown grid:oinstall authorized_keys
[root@rhel2 .ssh]# cat id_dsa.pub >> authorized_keys
[root@rhel2 .ssh]# cat id_rsa.pub >> authorized_keys
[grid@rhel2 ~]$ scp authorized_keys grid@rhel1:/home/grid/.ssh/
Step 4: Fix the permissions of authorized_keys on both nodes.
[grid@rhel1 ~]$ chmod 644 ~/.ssh/authorized_keys
[grid@rhel2 ~]$ chmod 644 ~/.ssh/authorized_keys
Step 5: Verify user equivalence.
Run the following commands on both nodes:
[grid@rhel1 ~]$ ssh rhel1 date
[grid@rhel1 ~]$ ssh rhel2 date
[grid@rhel2 ~]$ ssh rhel1 date
[grid@rhel2 ~]$ ssh rhel2 date
Configure time synchronization (use 11g's own Cluster Time Synchronization Service; all that is needed is to disable the NTP service)
1) Stop the NTP service: /sbin/service ntpd stop
2) Disable NTP at boot: chkconfig ntpd off
3) Delete or rename the NTP configuration file: rm /etc/ntp.conf
Install the cvuqdisk package
Upload linux.x64_11gR2_grid.zip, linux.x64_11gR2_database_1of2.zip and linux.x64_11gR2_database_2of2.zip.
Install cvuqdisk:
Unzip linux.x64_11gR2_grid.zip, locate cvuqdisk-1.0.7-1.rpm in the extracted grid/rpm directory, and install it with the following commands:
# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
# rpm -ivh cvuqdisk-1.0.7-1.rpm
Transfer the rpm to the other cluster node via sftp and install it there the same way.
Verify the installation environment with CVU
cd /u01/soft/
[root@rhel1 soft]# chown -R grid:oinstall grid/
su - grid
./runcluvfy.sh stage -pre crsinst -n rhel1,rhel2 -fixup -verbose
Check: Membership of user "grid" in group "dba"
Node Name User Exists Group Exists User in Group Comment
---------- ------- ------------ ------------ ---------
rhel2 yes yes no failed
rhel1 yes yes no failed
Result: Membership check for user "grid" in group "dba" failed
No check should fail. The user-membership failure above occurs because the CVU tool does not recognize the asmadmin, asmdba and asmoper groups assigned to the grid user and still judges against the dba OS group. As long as the grid user's groups are set correctly, this error can be ignored.
Create the ASM disks
Install the ASMLib driver
Install oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm,
oracleasmlib-2.0.4-1.el5.x86_64.rpm,
oracleasm-support-2.1.3-1.el5.x86_64.rpm
Configure the ASMLib driver (install and configure on both nodes):
[root@rhel1 soft]# rpm -ivh oracleasm*
warning: oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:oracleasm-support ########################################### [ 33%]
2:oracleasm-2.6.18-164.el########################################### [ 67%]
3:oracleasmlib ########################################### [100%]
[root@rhel1 soft]# /etc/init.d/oracleasm configure    (run on both nodes)
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
Create the ASMLib disks
Partition the shared disks (on one node only):
[root@rhel1 soft]# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1011, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1011, default 1011):
Using default value 1011
Command (m for help):
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Partition the remaining shared disks the same way.
[root@rhel1 soft]# partprobe
If the other node cannot see the new partitions, reboot it.
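Since every LUN gets the same single whole-disk primary partition, the interactive fdisk dialogue above (n, p, 1, Enter, Enter, w) can be scripted. A dry-run sketch that only prints the command line for each disk — remove the outer printf wrapper to run it for real, and note that repartitioning is destructive:

```shell
#!/bin/sh
# Dry-run: show how the interactive fdisk answers would be piped in for
# each shared LUN. The embedded printf feeds fdisk's prompts in order:
# n (new), p (primary), 1 (partition number), default start, default end,
# w (write).
gen_fdisk() {
    for disk in ocrvdisk1 ocrvdisk2 ocrvdisk3 dbfile1 fra1; do
        printf '%s\n' "printf 'n\np\n1\n\n\nw\n' | fdisk /dev/iscsi/${disk}/part"
    done
}
gen_fdisk
```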
[root@rhel1 iscsi]# cd /dev/iscsi
[root@rhel1 iscsi]# tree
.
|-- dbfile1
| |-- part -> ../../sdf
| `-- part1 -> ../../sdf1
Create the ASMLib disks (on one node only):
[root@rhel1 iscsi]# /etc/init.d/oracleasm createdisk OCRVDISK1 /dev/iscsi/ocrvdisk1/part1
Marking disk "OCRVDISK1" as an ASM disk: [ OK ]
[root@rhel1 iscsi]# /etc/init.d/oracleasm createdisk OCRVDISK2 /dev/iscsi/ocrvdisk2/part1
Marking disk "OCRVDISK2" as an ASM disk: [ OK ]
[root@rhel1 iscsi]# /etc/init.d/oracleasm createdisk OCRVDISK3 /dev/iscsi/ocrvdisk3/part1
Marking disk "OCRVDISK3" as an ASM disk: [ OK ]
[root@rhel1 iscsi]# /etc/init.d/oracleasm createdisk DBFILE1 /dev/iscsi/dbfile1/part1
Marking disk "DBFILE1" as an ASM disk: [ OK ]
[root@rhel1 iscsi]# /etc/init.d/oracleasm createdisk FRA1 /dev/iscsi/fra1/part1
Marking disk "FRA1" as an ASM disk: [ OK ]
[root@rhel1 iscsi]# cd /etc/init.d/
[root@rhel1 init.d]# ./oracleasm listdisks
DBFILE1
FRA1
OCRVDISK1
OCRVDISK2
OCRVDISK3
Scan on the other node so that it can see the ASM disks:
[root@rhel2 asmlib]# /etc/init.d/oracleasm scandisks
[root@rhel2 soft]# cd /etc/init.d/
[root@rhel2 init.d]# ./oracleasm listdisks
DBFILE1
FRA1
OCRVDISK1
OCRVDISK2
OCRVDISK3
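The two listdisks outputs must match exactly. The sketch below diffs the sorted lists with comm; the embedded data mirrors the outputs above, and in practice each node's ./oracleasm listdisks output would be saved to the corresponding file:

```shell
#!/bin/sh
# Compare the ASM disk lists from the two nodes; comm -3 prints any
# disk present on only one side, so empty output means they match.
# Sample data embedded; in practice redirect each node's
# `/etc/init.d/oracleasm listdisks` output into these files.
printf 'DBFILE1\nFRA1\nOCRVDISK1\nOCRVDISK2\nOCRVDISK3\n' | sort > /tmp/asm_rhel1.txt
printf 'DBFILE1\nFRA1\nOCRVDISK1\nOCRVDISK2\nOCRVDISK3\n' | sort > /tmp/asm_rhel2.txt
only_one_side=$(comm -3 /tmp/asm_rhel1.txt /tmp/asm_rhel2.txt)
[ -z "$only_one_side" ] && echo "disk lists match"
```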
Once the ASMLib disks are created, the corresponding device files appear under /dev/oracleasm; these device files can also be used when creating the ASM disk groups.
[root@rhel1 oracleasm]# pwd
/dev/oracleasm
[root@rhel1 oracleasm]# tree disks
disks
|-- OCRVDISK1
|-- OCRVDISK2
|-- OCRVDISK3
|-- DBFILE1
`-- FRA1
Make sure these files are all owned by grid:oinstall.
(1) View the device file behind each ASM disk
oracleasm querydisk -p shows the device file that backs a given ASM disk:
[root@rhel2 init.d]# ./oracleasm querydisk -p OCRVDISK1
Disk "OCRVDISK1" is a valid ASM disk
/dev/sda1: LABEL="OCRVDISK1" TYPE="oracleasm"
(2) View the ASMLib configuration
The configuration looks like this:
[root@rhel1 bin]# /usr/sbin/oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
Install the Grid Infrastructure
# xhost +
# su - grid
$ xclock
[grid@rhel1 ~]$ cd /u01/soft/grid/
[grid@rhel1 grid]$ ./runInstaller
Install the database software
The GI installation hit an unexpected problem and aborted:
Using configuration parameter file: /oracle/grid/crs/install/crsconfig_params
CRS is already configured on this node for crshome=0
Cannot configure two CRS instances on the same cluster.
Please deconfigure before proceeding with the configuration of new home.
root.sh had already been run once, so the previously registered information must be removed before running the script again. Delete the node registration with the following command:
# /oracle/grid/crs/install/roothas.pl -delete -force -verbose
Once the deconfiguration completes, run root.sh again and the error will not recur.
eht0:192.168.4.111
eth1:192.168.1.111
eth2:192.168.1.113
eth3:192.168.2.111
hostname:rhel1
/etc/sysconfig/network
节点二:
eht0:192.168.4.112
eth1:192.168.1.112
eth2:192.168.1.114
eth3:192.168.2.112
openfiler:
eth0:192.168.4.1
eth1:192.168.2.11
hostname:rhel2
/etc/sysconfig/network
DNS服务器:
eth0:192.168.4.102
[root@rhel1 ~]# cat /etc/yum.repos.d/CentOS-Base.repo
[CentOS]
name=CentOS-$releasever - Contrib
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib
baseurl=file:///mnt
gpgcheck=0
enabled=1
#gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
挂载iscsi磁盘
节点一、节点二(在两边都做)
cd /etc/init.d
[root@rhel1 init.d]#./iscsi start
iscsid is stopped
Turning off network shutdown. Starting iSCSI daemon: [ OK ]
Setting up iSCSI targets: iscsiadm: No records found![ OK ]
[root@rhel1 init.d]# chkconfig --list | grep scsi
iscsi 0:off 1:off 2:off 3:on 4:on 5:on 6:off
iscsid 0:off 1:off 2:off 3:on 4:on 5:on 6:off
探测存储服务器192.168.2.11提供的iSCSI Target:
[root@rhel1 init.d]# iscsiadm -m discovery -t sendtargets -p 192.168.2.11:3260
192.168.2.11:3260,1 iqn.2010-10.example.com:openfiler.ractest.ocrvdisk1
192.168.2.11:3260,1 iqn.2010-10.example.com:openfiler.ractest.ocrvdisk3
192.168.2.11:3260,1 iqn.2010-10.example.com:openfiler.ractest.datafile1
192.168.2.11:3260,1 iqn.2010-10.example.com:openfiler.ractest.fra1
192.168.2.11:3260,1 iqn.2010-10.example.com:openfiler.ractest.ocrvdisk2
挂载磁盘到本地
iscsiadm -m node -T iqn.2013-08.example.com.openfiler.ractest.ocrvdisk1 -p 192.168.2.11:3260 -l
iscsiadm -m node -T iqn.2013-08.example.com.openfiler.ractest.ocrvdisk2 -p 192.168.2.11:3260 -l
iscsiadm -m node -T iqn.2013-08.example.com.openfiler.ractest.ocrvdisk3 -p 192.168.2.11:3260 -l
iscsiadm -m node -T iqn.2013-08.example.com.openfiler.ractest.dbfile1 -p 192.168.2.11:3260 -l
iscsiadm -m node -T iqn.2013-08.example.com.openfiler.ractest.fra1 -p 192.168.2.11:3260 -l
查看本地磁盘信息:
[root@rhel1 init.d]# fdisk –l
配置udev固定iSCSI磁盘设备名称
在/etc/udev/rules.d目录下创建55-openiscsi.rules规则文件,将以下内容保存到文件中:
#/etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b",SYMLINK+="iscsi/%c/part%n"
下面是PROGRAM中指定的iscsidev.sh脚本。
在/etc/udev/scripts目录中创建iscsidev.sh脚本,保存以下内容,并添加此脚本的执行权限:
[root@rhel2 rules.d]# cat /etc/udev/scripts/iscsidev.sh
#!/bin/sh
# FILE: /etc/udev/scripts/iscsidev.sh
BUS=${1}
HOST=${BUS%%:*}
[ -e /sys/class/iscsi_host ] || exit 1
file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})
# This is not an open-scsi drive
if [ -z "${target_name}" ]; then
exit 1
fi
echo "${target_name##*.}"
chmod +x iscsidev.sh
配置yum源
[root@rhel1 yum.repos.d]# cat CentOS-Base.repo
[base]
name=CentOS-$releasever - Base
enabled=1
baseurl=file:///mnt/
gpgcheck=0
安装软件包
rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \
compat-db \
control-center \
gcc \
gcc-c++ \
glibc \
glibc-common \
glibc-devel \
libstdc++ \
libstdc++-devel \
make \
sysstat \
libaio \
compat-libstdc++-33 \
glibc-headers \
kernel-headers \
libXp \
openmotif22 \
compat-libf2c \
compat-libgcc \
libgomp \
libXmu \
elfutils-libelf \
elfutils-libelf-devel \
elfutils-libelf-devel-static \
libaio-devel \
unixODBC \
unixODBC-devel \
libgcc
yum配置遇到问题
http://mirrors.163.com/centos/5/os/i386/repodata/filelists.sqlite.bz2: [Errno -1] Metadata file does not match checksum
Trying other mirror.
Error:failure: repodata/filelists.sqlite.bz2 from base: [Errno 256] No more mirrors to try.
You could try running: package-cleanup --problems
package-cleanup --dupes
rpm -Va --nofiles --nodigest
The program package-cleanup is found in the yum-utils package.
今天遇到这个问题 根据提示试了好长时间 也没弄好。最后终于在官方网上找到类似的问题。以下是解决方法
yum clean all
yum makecache
使用上面的查询命令,compat-libf2c、compat-libgcc两个包即使安装成功(compatlibf2c需要安装32位和64位)
也会提示“is not installed”。使用以下命令进一步确认这两个包
是否被安装,如果能够查询到相应的包即可忽略该提示。
rpm -qa | grep compat-lib
修改系统参数
(1)内核参数调整
在Red Hat Enterprise Server 5.4中,shmmax、shmall参数系统默认已经设置,而且值足
够大,所以这两项不需要再设置。
(2)网络参数设置
编辑/etc/sysctl.conf,加入以下内容:
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=1048576
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
fs.aio-max-nr=1048576
使更改生效,root用户执行:
sysctl -p
(3)资源限制参数调整
编辑/etc/security/limits.conf,加入以下内容:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
(4)登录参数调整
编辑/etc/pam.d/login,加入以下内容:
session required /lib64/security/pam_limits.so
(5)/etc/profile配置
编辑/etc/profile,加入以下内容:
if [ \$USER = "oracle" ] || [ \$USER = "grid" ]; then
if [ \$SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
配置域名解析服务器:
确保bind包已经成功安装在DNS服务器,如果未安装执行yum install bind命令进行安装。
bind-9.3.3-7.el5
bind-utils-9.3.3-7.el5
bind-libs-9.3.3-7.el5
bind-chroot-9.3.3.7.el5
caching-nameserver-9.3.3-7.el5
创建域
在/var/named/chroot/etc/named.conf文件中加入如下的zone配置:
options {
directory "/var/named";
};
zone "4.168.192.in-addr.arpa." IN {#创建example.com的反向解析
type master;
file "192.168.4.db";
};
zone "example.com." IN {#创建example.com域
type master;
file "example.com.db";
};
在/var/named/chroot/var/named/目录下创建example.com.db文件,加入以下配置:
$TTL 1H
@ IN SOA homeserver.localdomain. root.homeserver.localdomain. ( 5
3H
1H
1W
1H )
@ IN NS homeserver.localdomain.
rhel-cluster-scan.grid IN 1H A 192.168.4.149
IN 1H A 192.168.4.150
IN 1H A 192.168.4.151
Hostserver是DNS服务器的机器名,localdomain是DNS服务器的域名。
在/var/named/chroot/var/named/目录下创建192.168.4.db文件,加入以下的配置:
$TTL 1H
@ IN SOA homeserver.localdomain.grid.example.com. root.homeserver.localdomain.grid.example.com. ( 2
3H
1H
1W
1H )
NS homeserver.localdomain.grid.example.com.
149 PTR rhel-cluster-scan.grid.example.com.
150 PTR rhel-cluster-scan.grid.example.com.
151 PTR rhel-cluster-scan.grid.example.com.
执行以下命令重启DNS服务:
[root@rhel1 Server]# service named restart
Stopping named: [ OK ]
Starting named:
chkconfig --level 35 named on
在节点一和节点二修改DNS客户端的配置
vim /etc/resolv.conf
search grid.example.com example.com
nameserver 192.168.4.102
执行service network restart命令重启network服务才能生效。
编辑/etc/nsswitch.conf文件,找到hosts开头的行,在该行的最后加入nis,修改结果如下:
hosts: files dns nis
验证DNS配置
nslookup rhel-cluster-scan.grid.example.com
节点服务器的配置(两个节点都要配置)
[root@rhel1 ~]# vim /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 rhel1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
# Public Network - (eth0)
192.168.4.111 rhel1 rhel1.localdomain
192.168.4.112 rhel2 rhel2.localdomain
# Private Interconnect - (eth1)
192.168.1.111 rhel1-priv
192.168.1.112 rhel2-priv
# Private Interconnect - (eth2)
192.168.1.113 rhel1-priv
192.168.1.114 rhel2-priv
# Public Virtual IP (VIP) addresses - (eth0:1)
192.168.4.113 rhel1-vip
192.168.4.114 rhel2-vip
# Private Storage Network for Openfiler - (eth2)
192.168.2.11 openfiler1
创建用户和组
groupadd -g 1000 oinstall
groupadd -g 1100 asmadmin
groupadd -g 1200 dba
groupadd -g 1201 oper
groupadd -g 1300 asmdba
groupadd -g 1301 asmoper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 1101 -g oinstall -G dba,oper,asmdba oracle
[root@rhel1 ~]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@rhel1 ~]# passwd grid
Changing password for user grid.
New UNIX password:
Retype new UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
创建目录
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01
mkdir /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
设置环境变量
1. 修改root用户环境变量
修改所有节点root用户$HOME目录下的.bash_profile文件,加入如下的配置:
alias sl='vi /var/log/messages'
alias rpmb='rpm -qa --queryformat %-{name}-%{version}-%{release}-%{arch}"\n"'
source .bash_profile使之生效
2. 修改grid用户环境变量
修改所有节点grid用户$HOME目录下的.bash_profile文件,加入如下配置:
alias ls="ls -FA"
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME
ORACLE_PATH=/u01/app/oracle/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
PATH=$PATH:$ORACLE_HOME/oracm/bin:$ORACLE_HOME/OPatch
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/ctx/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
export TMP=/tmp
export TMPDIR=/tmp
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
export SQLPATH=~/admin/sql:/$ORACLE_HOME/sqlplus/admin
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
umask 022
source .bash_profile使之生效
3. 修改oracle用户环境变量
修改所有节点oracle用户$HOME目录下的.bash_profile文件,加入如下配置:
alias ls="ls -FA"
ORACLE_SID=ractest1; export ORACLE_SID
ORACLE_UNQNAME=ractest; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_PATH=/u01/app/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
PATH=$PATH:$ORACLE_HOME/oracm/bin:$ORACLE_HOME/OPatch
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/ctx/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
export TMP=/tmp
export TMPDIR=/tmp
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
alias sql="sqlplus / as sysdba"
alias al='vi $ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log'
alias alt='tail -f
$ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log'
export SQLPATH=~/admin/sql:/$ORACLE_HOME/sqlplus/admin
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
umask 022
source .bash_profile使之生效
配置SSH用户的等效性(oracle,grid两个用户分别做等效)
配置oracle的对等关系
步骤1 在rhel1节点创建私匙和公匙。
在rhel1节点执行以下命令:
[grid@rhel1 ~]$ mkdir ~/.ssh
[grid@rhel1 ~]$ chmod 755 ~/.ssh
[grid@rhel1 ~]$/usr/bin/ssh-keygen -t dsa(一路回车)
[grid@rhel1 ~]$/usr/bin/ssh-keygen -t rsa(一路回车)
步骤2 在rhel2节点创建私匙和公匙。
在rhel2节点执行以下命令:
[grid@rhel2 ~]$ mkdir ~/.ssh
[grid@rhel2 ~]$ chmod 755 ~/.ssh
[grid@rhel2 ~]$ /usr/bin/ssh-keygen -t dsa
[grid@rhel2 ~]$ /usr/bin/ssh-keygen -t rsa
步骤3 将rhel1和rhel2节点的公匙内容拷贝到rhel1节点的authorized_keys文件中。
在rhel1节点执行以下命令:
[grid@rhel1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[grid@rhel1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@rhel1 .ssh]$ scp ~/.ssh/authorized_keys root@rhel2:/home/oracle/.ssh/(用oracle用户传不过去)
在rhel2节点执行一下命令:
[root@rhel2 .ssh]# chown oracle:oinstall authorized_keys
[grid@rhel2 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[grid@rhel2 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[grid@rhel2~]$ scp ~/.ssh/authorized_keys oracle@rhel1:/home/oracle/.ssh/
在节点1、2上修改authorized_keys文件的权限
[grid@rhel1 ~]$ chmod 644 ~/.ssh/authorized_keys
[grid@rhel2 ~]$ chmod 644 ~/.ssh/authorized_keys
步骤5 验证用户等效性。
在两个节点执行以下命令验证用户等效性:
[grid@rhel1 ~]$ ssh rhel1 date
[grid@rhel1 ~]$ ssh rhel2 date
[grid@rhel2 ~]$ ssh rhel1 date
[grid@rhel2 ~]$ ssh rhel2 date
配置grid对等关系
步骤1 在rhel1节点创建私匙和公匙。
在rhel1节点执行以下命令:
[grid@rhel1 ~]$ mkdir ~/.ssh
[grid@rhel1 ~]$ chmod 755 ~/.ssh
[grid@rhel1 ~]$/usr/bin/ssh-keygen -t dsa(一路回车)
[grid@rhel1 ~]$/usr/bin/ssh-keygen -t rsa(一路回车)
步骤2 在rhel2节点创建私匙和公匙。
在rhel2节点执行以下命令:
[grid@rhel2 ~]$ mkdir ~/.ssh
[grid@rhel2 ~]$ chmod 755 ~/.ssh
[grid@rhel2 ~]$ /usr/bin/ssh-keygen -t dsa
[grid@rhel2 ~]$ /usr/bin/ssh-keygen -t rsa
步骤3 将rhel1和rhel2节点的公匙内容拷贝到rhel1节点的authorized_keys文件中。
在rhel1节点执行以下命令:
[grid@rhel1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[grid@rhel1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@rhel1 .ssh]$ scp authorized_keys root@rhel2:/home/grid/.ssh/(用oracle用户传不过去)
在rhel2节点执行一下命令:
[root@rhel2 .ssh]# chown grid:oinstall authorized_keys
[root@rhel2 .ssh]# cat id_dsa.pub >> authorized_keys
[root@rhel2 .ssh]# cat id_rsa.pub >> authorized_keys
[grid@rhel2~]$ scp authorized_keys grid@rhel1:/home/grid/.ssh/
Step 4: On both nodes, set the permissions on the authorized_keys file.
[grid@rhel1 ~]$ chmod 644 ~/.ssh/authorized_keys
[grid@rhel2 ~]$ chmod 644 ~/.ssh/authorized_keys
Step 5: Verify user equivalence.
Run the following commands on both nodes; none of them should prompt for a password:
[grid@rhel1 ~]$ ssh rhel1 date
[grid@rhel1 ~]$ ssh rhel2 date
[grid@rhel2 ~]$ ssh rhel1 date
[grid@rhel2 ~]$ ssh rhel2 date
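The four checks above form a 2x2 matrix (each node to each node). The sketch below prints that matrix as commands; `BatchMode=yes` makes ssh fail instead of hanging at a password prompt, so a broken key setup is reported immediately. The function name and the BatchMode option are additions here, not part of the original steps; review the printed commands, then run them (or pipe the output to sh) on each node.

```shell
# Print the full equivalence-check matrix for the two nodes.
# BatchMode=yes: fail fast instead of prompting for a password.
equivalence_cmds() {
  for src in rhel1 rhel2; do
    for dst in rhel1 rhel2; do
      echo "[$src] ssh -o BatchMode=yes $dst date"
    done
  done
}
equivalence_cmds
```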
Configure time synchronization (use 11g's own Cluster Time Synchronization Service; all that is required is to disable the NTP server):
1) Stop the NTP service:              /sbin/service ntpd stop
2) Disable NTP at boot:               chkconfig ntpd off
3) Delete or rename the NTP config:   rm /etc/ntp.conf
Install the cvuqdisk package
Upload linux.x64_11gR2_grid.zip, linux.x64_11gR2_database_1of2.zip, and linux.x64_11gR2_database_2of2.zip.
Install cvuqdisk:
Unzip linux.x64_11gR2_grid.zip, locate cvuqdisk-1.0.7-1.rpm in the extracted grid/rpm directory, and install it with:
#CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
#rpm -ivh cvuqdisk-1.0.7-1.rpm
Copy the rpm to the other cluster node (for example via sftp) and install it there the same way.
Verify the installation environment with CVU
cd /u01/soft/
[root@rhel1 soft]# chown -R grid:oinstall grid/
su - grid
./runcluvfy.sh stage -pre crsinst -n rhel1,rhel2 -fixup -verbose
Check: Membership of user "grid" in group "dba"
Node Name User Exists Group Exists User in Group Comment
---------- ------- ------------ ------------ ---------
rhel2 yes yes no failed
rhel1 yes yes no failed
Result: Membership check for user "grid" in group "dba" failed
No check should fail during verification. The user check failure shown above occurs because CVU does not recognize the asmadmin, asmdba, and asmoper groups assigned to the grid user and still tests membership in the dba OS group. As long as the grid user's groups are set up correctly, this error can be safely ignored.
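Before ignoring the CVU warning, it is worth confirming that grid really is in the ASM groups. The helper below (our own sketch, not an Oracle tool) takes the output of `id -Gn grid` and reports each required group:

```shell
# Check a space-separated group list for the three ASM groups that CVU
# cannot see. Usage on a node: check_asm_groups "$(id -Gn grid)"
check_asm_groups() {
  for g in asmadmin asmdba asmoper; do
    case " $1 " in
      *" $g "*) echo "$g: ok" ;;
      *)        echo "$g: MISSING" ;;
    esac
  done
}
check_asm_groups "oinstall asmadmin asmdba asmoper"
```

If any group is reported MISSING, fix the grid user's secondary groups before continuing; in that case the CVU failure is real, not cosmetic.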
Create the ASM disks
Install the ASMLib driver
Install the following packages:
oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm
oracleasm-support-2.1.3-1.el5.x86_64.rpm
Configure the ASMLib driver (install and configure on both nodes):
[root@rhel1 soft]# rpm -ivh oracleasm*
warning: oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:oracleasm-support ########################################### [ 33%]
2:oracleasm-2.6.18-164.el########################################### [ 67%]
3:oracleasmlib ########################################### [100%]
[root@rhel1 soft]# /etc/init.d/oracleasm configure    (configure on both nodes)
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
Create the ASMLib disks
Partition the shared disks (on one node only):
[root@rhel1 soft]# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1011, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1011, default 1011):
Using default value 1011
Command (m for help):
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Partition the remaining shared disks the same way, then re-read the partition tables:
[root@rhel1 soft]# partprobe
If the other node cannot see the new partitions, reboot it.
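The interactive fdisk dialog shown above (n, p, 1, two Enters for the default cylinders, w) is identical for every disk, so it can be scripted. In the sketch below, only /dev/sdc and /dev/sdf appear in this guide's output; the other device names are assumptions, so list your actual iSCSI devices with `fdisk -l` first. The commands are printed for review; remove the echo to execute them.

```shell
# Print a scripted single-primary-partition fdisk run for each device.
partition_cmds() {
  for dev in "$@"; do
    echo "printf 'n\np\n1\n\n\nw\n' | fdisk $dev"
  done
}
partition_cmds /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
```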
[root@rhel1 ~]# cd /dev/iscsi
[root@rhel1 iscsi]# tree
.
|-- dbfile1
| |-- part -> ../../sdf
| `-- part1 -> ../../sdf1
Create the ASMLib disks (on one node only):
[root@rhel1 iscsi]# /etc/init.d/oracleasm createdisk OCRVDISK1 /dev/iscsi/ocrvdisk1/part1
Marking disk "OCRVDISK1" as an ASM disk: [ OK ]
[root@rhel1 iscsi]# /etc/init.d/oracleasm createdisk OCRVDISK2 /dev/iscsi/ocrvdisk2/part1
Marking disk "OCRVDISK2" as an ASM disk: [ OK ]
[root@rhel1 iscsi]# /etc/init.d/oracleasm createdisk OCRVDISK3 /dev/iscsi/ocrvdisk3/part1
Marking disk "OCRVDISK3" as an ASM disk: [ OK ]
[root@rhel1 iscsi]# /etc/init.d/oracleasm createdisk DBFILE1 /dev/iscsi/dbfile1/part1
Marking disk "DBFILE1" as an ASM disk: [ OK ]
[root@rhel1 iscsi]# /etc/init.d/oracleasm createdisk FRA1 /dev/iscsi/fra1/part1
Marking disk "FRA1" as an ASM disk: [ OK ]
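The five createdisk calls above all follow one pattern: an upper-case ASM disk name and the matching lower-case udev path. The sketch below derives the path from the name and prints each command so the batch can be reviewed first; remove the echo to run them for real (on one node only, as above).

```shell
# Print the oracleasm createdisk command for each named disk,
# deriving the /dev/iscsi/<lowercase>/part1 path used in this guide.
createdisk_cmds() {
  for name in "$@"; do
    lower=$(echo "$name" | tr 'A-Z' 'a-z')
    echo "/etc/init.d/oracleasm createdisk $name /dev/iscsi/$lower/part1"
  done
}
createdisk_cmds OCRVDISK1 OCRVDISK2 OCRVDISK3 DBFILE1 FRA1
```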
[root@rhel1 iscsi]# cd /etc/init.d/
[root@rhel1 init.d]# ./oracleasm listdisks
DBFILE1
FRA1
OCRVDISK1
OCRVDISK2
OCRVDISK3
Scan on the other node so that it can also see the ASM disks:
[root@rhel2 asmlib]# /etc/init.d/oracleasm scandisks
[root@rhel2 soft]# cd /etc/init.d/
[root@rhel2 init.d]# ./oracleasm listdisks
DBFILE1
FRA1
OCRVDISK1
OCRVDISK2
OCRVDISK3
Once the ASMLib disks are created, corresponding device files appear under /dev/oracleasm; these device files can also be used when creating ASM disk groups.
[root@rhel1 oracleasm]# pwd
/dev/oracleasm
[root@rhel1 oracleasm]# tree disks
disks
|-- OCRVDISK1
|-- OCRVDISK2
|-- OCRVDISK3
|-- DBFILE1
`-- FRA1
Make sure these files are owned by grid:asmadmin (the owner and group set during oracleasm configure above).
(1) Find the block device behind an ASM disk
oracleasm querydisk -p shows the device file that backs a given ASM disk:
[root@rhel2 init.d]# ./oracleasm querydisk -p OCRVDISK1
Disk "OCRVDISK1" is a valid ASM disk
/dev/sda1: LABEL="OCRVDISK1" TYPE="oracleasm"
(2) View the ASMLib configuration
The current configuration:
[root@rhel1 bin]# /usr/sbin/oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
Install Grid Infrastructure
#xhost +
#su - grid
$xclock
[grid@rhel1 ~]$ cd /u01/soft/grid/
[grid@rhel1 grid]$ ./runInstaller
Install the database (DBMS)
If the Grid Infrastructure installation is interrupted unexpectedly, rerunning root.sh reports:
Using configuration parameter file: /oracle/grid/crs/install/crsconfig_params
CRS is already configured on this node for crshome=0
Cannot configure two CRS instances on the same cluster.
Please deconfigure before proceeding with the configuration of new home.
This happens because root.sh has already been run once. The previously registered configuration must be removed before root.sh is rerun. Delete the node registration with:
# /oracle/grid/crs/install/roothas.pl -delete -force -verbose
Once the removal completes, rerun root.sh and the error will not recur.
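A quick way to tell whether a node is in this "already configured" state is to look for the OCR location file. On 11gR2 Linux the stack writes /etc/oracle/ocr.loc once configured; that path is an assumption about this platform, not taken from the log above, so treat this as a hint rather than an authoritative check.

```shell
# Report whether a CRS configuration marker file exists at the given path.
crs_state() {
  if [ -f "$1" ]; then
    echo "CRS already configured on this node - deconfigure before root.sh"
  else
    echo "no existing CRS configuration found - safe to run root.sh"
  fi
}
crs_state /etc/oracle/ocr.loc
```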