Approaches to making an LVS cluster highly available:
heartbeat+ldirectord
    ldirectord generates the ipvs rules
    ldirectord performs health checks on the real servers
corosync+ldirectord
keepalived+ipvs
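Of these, keepalived+ipvs needs no separate health-check daemon: VRRP handles director failover and keepalived itself maintains the ipvs rules and checks the real servers. A minimal sketch of keepalived.conf on the master director (the VIP, real-server address, interface and check timeout below are illustrative assumptions):
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass pass1234
        }
        virtual_ipaddress {
            172.16.100.100
        }
    }
    virtual_server 172.16.100.100 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        protocol TCP
        real_server 172.16.100.21 80 {
            weight 1
            TCP_CHECK {
                connect_timeout 3
            }
        }
    }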
server: rsync+inotify
client: rsync, sersync
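A minimal sketch of the inotify+rsync push loop (the watched directory, rsync daemon module and password file are assumptions; sersync does the same job driven by an XML config instead of a script):
    #!/bin/bash
    # watch the content directory and push every change to the rsync daemon
    SRC=/data/www/
    DEST=backuper@172.16.100.22::web
    inotifywait -mrq -e modify,create,delete,attrib --format '%w%f' "$SRC" | \
    while read file; do
        rsync -az --delete "$SRC" "$DEST" --password-file=/etc/rsyncd.passwd
    done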
DAS
NAS
SAN
SCSI: initiator (the client side that issues requests)
      target (the storage side that serves them)
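For the iSCSI flavor of SAN, the initiator side is usually driven with iscsiadm; a sketch (the target IP and IQN are assumptions):
    yum -y install iscsi-initiator-utils
    iscsiadm -m discovery -t sendtargets -p 172.16.100.30                     # discover targets
    iscsiadm -m node -T iqn.2019-03.com.example:store1 -p 172.16.100.30 -l    # log in to a target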
Distributed replicated block device: DRBD (Distributed Replicated Block Device)
RAID1: mirroring
DRBD: primary/secondary model
    primary: read and write operations are allowed
    secondary: the filesystem cannot be mounted
    node A: primary
    node B: secondary
DRBD: dual primary (dual-primary model)
    DLM: distributed lock manager
    GFS2/OCFS2
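In the dual-primary model both nodes may write, so a cluster filesystem (GFS2/OCFS2) with DLM is mandatory on top of DRBD. A sketch of the extra options in the resource definition (shown for the mydrbd resource configured later; the after-sb policies are one common choice, not the only one):
    resource mydrbd {
        startup {
            become-primary-on both;
        }
        net {
            allow-two-primaries;
            after-sb-0pri discard-zero-changes;
            after-sb-1pri discard-secondary;
            after-sb-2pri disconnect;
        }
        # device/disk/address/meta-disk as in the single-primary example below
    }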
Disk scheduler: merges read requests, merges write requests
protocol:
    A (async): asynchronous; the write completes once the data has reached the local disk and the local TCP send buffer
    B (semi sync): semi-synchronous; the write completes once the peer has received the data (in memory)
    C (sync): synchronous; the write completes only after the peer has also written the data to its disk
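The protocol is selected with a single keyword in the DRBD configuration, typically in the common section of global_common.conf (see below) or per resource; protocol C is the usual choice for an HA pair:
    protocol C;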
DRBD resource:
    resource name: any ASCII characters except whitespace
    DRBD device name (roughly comparable to the name a RAID array gets, e.g. md0): the device file of this DRBD device on both nodes, usually /dev/drbdN, with major number 147
    disk: the backing storage device each node provides
    network configuration: the network attributes used when the two sides synchronize data
DRBD: merged into the mainline kernel since 2.6.33
Download sites:
    official site: www.linbit.com
    http://mirrors.sohu.com/
DRBD configuration (with this configuration the primary/secondary roles can only be switched manually)
Commands:
drbdadm
drbdsetup
drbdmeta
for i in {1..2}
do
ssh node$i 'wget ftp://172.16.0.1/pub/sources/drbd/a.rpm'
done
node1:
yum install drbd kmod-drbd
for i in {1..2}
do
ssh node$i 'yum -y install drbd kmod-drbd'
done
rpm -ql drbd
fdisk /dev/sda -->/dev/sda5
cp /usr/share/doc/drbd*/drbd.conf /etc/drbd.conf
node2:
yum install drbd kmod-drbd
fdisk /dev/sda -->/dev/sda5
Both nodes must have the packages installed: yum install drbd kmod-drbd
Both nodes must have a partition prepared as the DRBD backing device: partition with fdisk and do not format it
DRBD configuration files (the configuration is identical on both nodes):
    /etc/drbd.conf (pulls in the resource definitions)
    /etc/drbd.d/global_common.conf (settings common to all resources)
    /etc/drbd.d/*.res (per-resource definitions)
vim /etc/drbd.d/global_common.conf
handlers {
    uncomment the pri-on-incon-degr line
    uncomment the pri-lost-after-sb line
    uncomment the local-io-error line
}
disk {
    on-io-error detach;
}
net {
    cram-hmac-alg "sha1";
    shared-secret "mydrb7tj45";
}
syncer {
    rate 200M;
}
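For reference, after these edits the common section looks roughly like the sketch below. The three handler command strings are the stock ones shipped commented-out in the drbd 8.3 template; verify them against the local file rather than copying from here, and keep whatever shared-secret was chosen above.
    common {
        protocol C;
        handlers {
            pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
            pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
            local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        }
        disk {
            on-io-error detach;
        }
        net {
            cram-hmac-alg "sha1";
            shared-secret "mydrb7tj45";
        }
        syncer {
            rate 200M;
        }
    }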
vim /etc/drbd.d/mydrbd.res
resource mydrbd {
on node1.chenjiao.com {
device /dev/drbd0;
disk /dev/sda5;
address 172.16.100.6:7789;
meta-disk internal;
}
on node2.chenjiao.com {
device /dev/drbd0;
disk /dev/sda5;
address 172.16.100.7:7789;
meta-disk internal;
}
}
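Before copying the files to the other node, the syntax can be sanity-checked with drbdadm; dump parses the configuration and prints the resource as it was understood (an optional check, not part of the original steps):
    drbdadm dump mydrbd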
scp -r /etc/drbd.* node2:/etc
Initialize the metadata:
    node1: drbdadm create-md mydrbd
    node2: drbdadm create-md mydrbd
Start the service:
    node1: /etc/init.d/drbd start
    node2: /etc/init.d/drbd start
cat /proc/drbd
drbd-overview
node1 (run this on one node only; it defines which node becomes primary): drbdadm -- --overwrite-data-of-peer primary mydrbd
watch -n 1 cat /proc/drbd
Only the primary node can format and mount the device:
mkfs -t ext3 /dev/drbd0
mkdir /mydata
mount /dev/drbd0 /mydata
cp /etc/inittab /mydata
umount /mydata
drbdadm secondary mydrbd
drbd-overview
On the other node:
drbd-overview
drbdadm primary mydrbd
mkdir /mydata
mount /dev/drbd0 /mydata
DRBD + corosync + pacemaker configuration (automatic primary/secondary failover); both nodes are configured the same
Turn the DRBD device into a cluster resource
node1:
service drbd stop
yum install corosync corosynclib cluster-glue cluster-glue-libs heartbeat heartbeat-libs pacemaker pacemaker-cts pacemaker-libs resource-agents libesmtp
mkdir /var/log/cluster
cd /etc/corosync; cp corosync.conf.example corosync.conf
vim corosync.conf
secauth: on
threads: 2
bindnetaddr: 172.16.0.0
mcastaddr: 239.212.16.19
logging {
    to_syslog: no
}
service {
    ver: 0
    name: pacemaker
}
aisexec {
    user: root
    group: root
}
corosync-keygen
scp -p authkey corosync.conf node2:/etc/corosync/
service corosync start
ssh node2 'service corosync start'
See the other notes for verifying a normal startup; a few typical checks are sketched below.
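These checks assume the logging section was pointed at /var/log/cluster/corosync.log (the directory created above); adjust to the actual logging settings:
    grep -i "totem"        /var/log/cluster/corosync.log   # TOTEM membership formed
    grep -i "pcmk_startup" /var/log/cluster/corosync.log   # pacemaker plugin initialized
    grep -i "error"        /var/log/cluster/corosync.log   # should be empty or explainable
    crm status                                             # both nodes should show Online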
crm configure
crm(live)configure# verify
crm(live)configure# property stonith-enabled=false
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# verify
crm(live)configure# commit
crm(live)ra# providers drbd
crm(live)ra# classes
crm(live)ra# meta ocf:linbit:drbd
crm(live)configure# primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mydrbd op start timeout=240 op stop timeout=100 op monitor role=Master interval=20s timeout=30s op monitor role=Slave interval=30s timeout=30s
crm(live)configure# verify
crm(live)configure# ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
crm(live)configure# commit
drbd-overview
crm node standby
crm node online
crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydata fstype=ext3 op start timeout=60 op stop timeout=60
crm(live)configure# verify
crm(live)configure# colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master
crm(live)configure# order mystore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mystore:start
mkdir /mydata/data
chown -R mysql.mysql /mydata/data
scripts/mysql_install_db --user=mysql --datadir=/mydata/data/
vim /etc/my.cnf
datadir=/mydata/data
crm(live)configure# primitive mysqld lsb:mysqld
crm(live)configure# verify
crm(live)configure# colocation mysqld_with_mystore inf: mysqld mystore
crm(live)configure# order mysqld_after_mystore mandatory: mystore mysqld
crm(live)configure# verify
crm(live)configure# show xml
crm(live)configure# commit
crm(live)configure# primitive myip ocf:heartbeat:IPaddr params ip=172.16.100.1 nic=eth0 cidr_netmask=16
crm(live)configure# colocation myip_with_ms_mysqldrbd inf: ms_mysqldrbd:Master myip
crm(live)configure# commit
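Once everything is committed, the stack can be inspected with crm status and, assuming MySQL listens on the default port and an account is allowed to connect remotely, tested from a client through the VIP:
    crm status
    mysql -h 172.16.100.1 -u root -p -e 'SHOW DATABASES;'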
node2:
drbd-overview
umount /mydata
drbdadm secondary mydrbd
service drbd stop
yum install corosync corosynclib cluster-glue heartbeat pacemaker resource-agents libesmtp
mkdir /var/log/cluster
DLM: Distributed Lock Manager
Reposted from: https://blog.51cto.com/12406012/2368189