1 Description
At my previous company I worked with RHCS 6.3 administration. Back then I spent every day writing disaster-recovery project documentation and never found time to run my own RHCS experiments, which became a nagging regret.
Recently I finally had the time to work through this long-standing interest and write it up as my own document.
The system environment for this article is RHEL 6.5 x86 64-bit. The plan is to build RHCS in active/standby mode, tested with two services: httpd and mountd (a self-written script that mounts a shared disk).
RHCS cluster theory is not covered here; see the links given in the references section at the end.
2 Environment

Description | Hostname | OS version | IP | Notes
virtual node 1 | node1 | rhel-server-6.5-x86_64-dvd | 192.168.65.11 |
virtual node 2 | node2 | rhel-server-6.5-x86_64-dvd | 192.168.65.12 |
shared storage VM | openfiler | openfileresa-2.99.1-x86_64-disc1 | 192.168.65.100 |
3 Process Design
Configure openfiler (omitted in this article)
Install the cluster packages (set up a local yum repository)
Configure the cluster
Test the cluster
Planned per-node configuration:

/etc/hosts (identical on node1 and node2):

[root@node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.65.11 node1
192.168.65.12 node2
192.168.65.100 openfiler
[root@node1 ~]#

Fence device: this environment runs on virtual machines with no fence hardware, so the fence step is skipped. On real hardware, configure a fence device of one of the types listed by the cluster software.

eth0 on node1:

[root@node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
IPADDR=192.168.65.11
NETMASK=255.255.255.0
GATEWAY=192.168.65.1
[root@node1 ~]#

eth0 on node2:

[root@node2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
HWADDR=00:0C:29:A6:C3:85
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.65.12
NETMASK=255.255.255.0
GATEWAY=192.168.65.1
[root@node2 ~]#

virtual_IP: 192.168.65.15 (floats between the two nodes)

httpd: /etc/init.d/httpd (the stock httpd init script)

iSCSI shared storage disk: /dev/sdb1 (visible on both nodes)

mountd start/stop script (identical on node1 and node2):

[root@node1 ~]# cat /etc/init.d/mountd
#!/bin/bash
# mountd startup script for RHCS6.5 mount test
#
# /etc/init.d/mountd
#
# chkconfig: 2345 02 98
# description: mountd is meant to run under Linux

# Source function library.
. /etc/rc.d/init.d/functions

# Mount the shared disk and hand it to the ricci user.
start() {
    echo -n "Starting mountd Server:"
    mount /dev/sdb1 /mnt/sdb1
    chown -R ricci:ricci /mnt/sdb1
    echo
}

# Unmount the shared disk.
stop() {
    echo -n "Shutting down mountd Server:"
    umount /mnt/sdb1
    echo
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  status)
    # rgmanager treats exit 0 as "running" and non-zero as "stopped".
    mount | grep -q '/mnt/sdb1'
    exit $?
    ;;
  restart|reload)
    stop
    start
    ;;
  *)
    echo "Usage: $0 {start|stop|reload|restart|status}"
    exit 1
    ;;
esac
exit 0
[root@node1 ~]#
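One prerequisite the script itself does not handle: it must be executable, and the mount point must already exist. A minimal preparation step, run on both nodes, might look like this:

[root@node1 ~]# chmod +x /etc/init.d/mountd
[root@node1 ~]# mkdir -p /mnt/sdb1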
4 Detailed Steps
4.1 Disable iptables and SELinux
[root@node1 ~]# iptables -F
[root@node1 ~]# chkconfig iptables off
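Note that iptables -F only flushes the rules currently loaded, and chkconfig iptables off only affects the next boot; to be sure the firewall is also stopped in the running session:

[root@node1 ~]# service iptables stop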
[root@node1 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
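Editing /etc/selinux/config only takes effect after a reboot; if SELinux is currently enforcing, it can be switched off for the running session with setenforce, for example:

[root@node1 ~]# setenforce 0
[root@node1 ~]# getenforce
Permissive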
4.2 Configure a local yum repository
[root@node1 ~]# mkdir /mnt/cdrom/
[root@node1 ~]# mount /dev/cdrom /mnt/cdrom/
[root@node1 ~]# cat /etc/yum.repos.d/rhel-debuginfo.repo
[rhel_6_iso]
name=local iso
baseurl=file:///mnt/cdrom
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release
[rhel_6-HA_iso]
name=local iso
baseurl=file:///mnt/cdrom/HighAvailability
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release
[rhel_6-LB_iso]
name=local iso
baseurl=file:///mnt/cdrom/LoadBalancer
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release
[rhel_6-RS_iso]
name=local iso
baseurl=file:///mnt/cdrom/ResilientStorage
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release
[root@node1 ~]#
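After writing the repo file, it is worth confirming that yum can actually see the local repositories, for example:

[root@node1 ~]# yum clean all
[root@node1 ~]# yum repolist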
4.3 Install the HA packages
[root@node1 ~]# yum install cluster-glue resource-agents pacemaker
[root@node1 ~]# yum install luci ricci cman openais rgmanager lvm2-cluster gfs2-utils
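A quick sanity check that the key packages landed, e.g.:

[root@node1 ~]# rpm -q luci ricci cman rgmanager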
4.4 Start the HA services
[root@node1 ~]# service luci start
[root@node1 ~]# service ricci start
[root@node1 ~]# service cman start
[root@node1 ~]# service rgmanager start
(rgmanager depends on cman, so cman must be started first)
4.5 Enable the HA services at boot
[root@node1 ~]# chkconfig ricci on
[root@node1 ~]# chkconfig luci on
[root@node1 ~]# chkconfig cman on
[root@node1 ~]# chkconfig rgmanager on
[root@node1 ~]# chkconfig NetworkManager off  (this service must be disabled, otherwise adding nodes to the cluster will fail)
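As with the other services, chkconfig only affects the next boot; NetworkManager should also be stopped in the current session:

[root@node1 ~]# service NetworkManager stop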
4.6 Set the ricci user's password
[root@node1 ~]# passwd ricci
After completing the steps above on node1, perform the same operations on node2.
4.7 Configure the cluster
Since the luci cluster management tool is installed on both nodes, either node can act as the cluster management server. (With more machines available, luci could be installed on a separate machine.)
Log in to the luci management interface (the luci service prints its URL when it starts).
Create the cluster.
Add the node names and members.
Add resources:
192.168.65.15 is the virtual IP.
httpd is the web service.
mountd is the service that mounts the shared directory.
Add failover domains.
Lower priority numbers take precedence.
Create service groups.
Create a service group named web, select the failover domain df created above, and add the IP resource and the httpd script resource to it. When one resource depends on another, the dependent resource must be added as a child (add a child resource) of its prerequisite; in this example httpd is a child resource of the IP.
Create a service group named mountd, select the failover domain df, and add the mountd resource to it.
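For reference, the /etc/cluster/cluster.conf that luci generates from the steps above should look roughly like the sketch below (the cluster name mycluster and the recovery policy are assumptions; the exact attributes luci writes may differ):

<?xml version="1.0"?>
<cluster config_version="1" name="mycluster">
  <clusternodes>
    <clusternode name="node1" nodeid="1"/>
    <clusternode name="node2" nodeid="2"/>
  </clusternodes>
  <rm>
    <failoverdomains>
      <!-- ordered domain: the lower priority number wins -->
      <failoverdomain name="df" ordered="1">
        <failoverdomainnode name="node1" priority="1"/>
        <failoverdomainnode name="node2" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <ip address="192.168.65.15" monitor_link="on"/>
      <script file="/etc/init.d/httpd" name="httpd"/>
      <script file="/etc/init.d/mountd" name="mountd"/>
    </resources>
    <!-- httpd is a child of the IP: the IP comes up first -->
    <service domain="df" name="web" recovery="relocate">
      <ip ref="192.168.65.15">
        <script ref="httpd"/>
      </ip>
    </service>
    <service domain="df" name="mountd" recovery="relocate">
      <script ref="mountd"/>
    </service>
  </rm>
</cluster>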
Testing
After the service groups are started, luci shows which node each service group is running on.
On both nodes, write an index.html page for httpd:
[root@node1 ~]# echo node1 > /var/www/html/index.html
[root@node2 ~]# echo node2 > /var/www/html/index.html
Access the Apache service at the virtual IP 192.168.65.15; the content returned shows which node is currently providing the cluster service.
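A quick check from the shell (the response is node1 or node2, depending on which node currently runs the web service group):

[root@node1 ~]# curl http://192.168.65.15/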
The virtual IP cannot be seen with ifconfig; use the ip command instead.
[root@node2 ~]# ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:a6:c3:85 brd ff:ff:ff:ff:ff:ff
inet 192.168.65.12/24 brd 192.168.65.255 scope global eth0
inet 192.168.65.15/24 scope global secondary eth0
inet6 fe80::20c:29ff:fea6:c385/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:a6:c3:8f brd ff:ff:ff:ff:ff:ff
inet 10.65.65.12/24 brd 10.65.65.255 scope global eth1
inet6 fe80::20c:29ff:fea6:c38f/64 scope link
valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:a6:c3:99 brd ff:ff:ff:ff:ff:ff
inet 192.168.70.135/24 brd 192.168.70.255 scope global eth2
inet6 fe80::20c:29ff:fea6:c399/64 scope link
valid_lft forever preferred_lft forever
[root@node2 ~]#
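To look for just the virtual IP, for example:

[root@node2 ~]# ip addr show eth0 | grep 192.168.65.15
    inet 192.168.65.15/24 scope global secondary eth0

Similarly, on the node currently running the mountd service group, the mount output below shows the shared disk /dev/sdb1 mounted on /mnt/sdb1: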
[root@node1 ~]# mount
/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
/dev/mapper/VolGroup-lv_home on /home type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
vmware-vmblock on /var/run/vmblock-fuse type fuse.vmware-vmblock (rw,nosuid,nodev,default_permissions,allow_other)
none on /sys/kernel/config type configfs (rw)
gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev)
/dev/sdb1 on /mnt/sdb1 type ext4 (rw)
[root@node1 ~]#
Shutdown test: power off node1; node2 takes over the services without problems.
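Besides powering a node off, failover can also be exercised with the rgmanager command-line tools: clustat shows which node each service group runs on, and clusvcadm relocates a service group by hand, for example:

[root@node1 ~]# clustat
[root@node1 ~]# clusvcadm -r web -m node2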
5 Personal Summary
6 References
Oracle9i for RHCS auto-start script
http://whr25.blog.sohu.com/76873975.html
Deleting all blank lines in a text file with the vi editor
http://blog.chinaunix.net/uid-20652643-id-1906281.html
http://book.51cto.com/art/201006/206590.htm
Implementing an RHCS cluster on RedHat 6.5
http://www.limingit.com/sitecn/itjq/1645_1694.html
RHEL6.4 x64 + RHCS + Conga (luci/ricci) + iSCSI + CLVM + gfs2 installation and configuration V1.0
http://www.kankanews.com/ICkengine/archives/103681.shtml