A Summary of a High-Availability Cluster Management Tool

Overview

  • Today's topic is a convenient, quick tool for managing a cluster: it lets us schedule services across our servers, control resources such as memory, and handle a range of related tasks.
  • Preparation
  • Three virtual machines, each with the yum repositories configured as follows:
 [root@server1 ~]# cat /etc/yum.repos.d/rhel-source.repo 
[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.60.250/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[HighAvailability]
name=Red Hat Enterprise Linux HighAvailability
baseurl=http://172.25.60.250/rhel6.5/HighAvailability
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[LoadBalancer]
name=Red Hat Enterprise Linux LoadBalancer
baseurl=http://172.25.60.250/rhel6.5/LoadBalancer
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[ResilientStorage]
name=Red Hat Enterprise Linux ResilientStorage
baseurl=http://172.25.60.250/rhel6.5/ResilientStorage
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[ScalableFileSystem]
name=Red Hat Enterprise Linux ScalableFileSystem
baseurl=http://172.25.60.250/rhel6.5/ScalableFileSystem
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
  • Today's experiment is done on Red Hat Enterprise Linux 6.5, so the yum repositories point at the 6.5 installation image, which I serve from my own Apache instance under /var/www/html/rhel6.5 (see the sketch below).
  • With the preparation done, we can move on to today's topic. To begin we only need the server1 and server2 virtual machines; server3 will be used later as the storage node.
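A minimal sketch of serving the installation image over HTTP on the host (the ISO file name and path are assumptions, not taken from the original; the host address 172.25.60.250 comes from the repo baseurl):

[root@foundation60 ~]# mkdir -p /var/www/html/rhel6.5
[root@foundation60 ~]# mount -o loop /isos/rhel-server-6.5-x86_64-dvd.iso /var/www/html/rhel6.5   # loop-mount the install ISO
[root@foundation60 ~]# systemctl start httpd    # the repo is then reachable at http://172.25.60.250/rhel6.5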

The experiment

  • server1 configuration
    - Install the packages
yum install -y luci   # luci: the web-based management front end for ricci
[root@server1 ~]# /etc/init.d/luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `server1' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
    (none suitable found, you can still do it manually as mentioned above)

Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Start luci...                                              [  OK  ]
Point your web browser to https://server1:8084 (or equivalent) to access luci
 yum install -y ricci    # ricci: the node agent that manages the high-availability software
[root@server1 ~]# cat /etc/redhat-release    # check the release
Red Hat Enterprise Linux Server release 6.5 (Santiago)
[root@server1 ~]# echo westos | passwd --stdin ricci  # set the ricci user's password
Changing password for user ricci.
passwd: all authentication tokens updated successfully.
[root@server1 ~]# /etc/init.d/ricci start   # start the service
Starting system message bus:                               [  OK  ]
Starting oddjobd:                                         [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@server1 ~]# chkconfig --list ricci
ricci           0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@server1 ~]# chkconfig ricci on    # enable it at boot
[root@server1 ~]# chkconfig --list ricci
ricci           0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@server1 ~]# date   # the nodes' clocks must be in sync
Sat Sep 23 22:16:50 CST 2017
[root@server1 ~]# cat /etc/hosts   # local name resolution
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.60.1 server1
172.25.60.2 server2
172.25.60.3 server3
172.25.60.4 server4
172.25.60.5 server5
  • server2 configuration
yum install -y ricci
[root@server2 yum.repos.d]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.5 (Santiago)
[root@server2 yum.repos.d]# echo westos | passwd --stdin ricci
Changing password for user ricci.
passwd: all authentication tokens updated successfully.
[root@server2 yum.repos.d]# /etc/init.d/ricci start
Starting system message bus:                               [  OK  ]
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@server2 yum.repos.d]# chkconfig --list ricci
ricci           0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@server2 yum.repos.d]# chkconfig ricci on
[root@server2 yum.repos.d]# chkconfig --list ricci
ricci           0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@server2 yum.repos.d]# date
Sat Sep 23 22:16:45 CST 2017
[root@server2 yum.repos.d]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.60.1 server1
172.25.60.2 server2
172.25.60.3 server3
172.25.60.4 server4
172.25.60.5 server5

1: Add the cluster nodes

1: Accept the certificate (screenshot)
2: Log in with the password (screenshot)
3: Click OK (screenshot)
4: Choose Admin (screenshot)
5: Create the cluster and add the nodes to connect to (screenshot)
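The same cluster can also be built from the command line with the ccs tool shipped with the High Availability add-on. This is only an alternative sketch, not what was done here, and it assumes the ccs package is installed on one node:

[root@server1 ~]# yum install -y ccs
[root@server1 ~]# ccs -h server1 --createcluster westos_dou    # prompts for the ricci password
[root@server1 ~]# ccs -h server1 --addnode server1
[root@server1 ~]# ccs -h server1 --addnode server2
[root@server1 ~]# ccs -h server1 --sync --activate             # push cluster.conf to all nodes
[root@server1 ~]# ccs -h server1 --startall                    # start the cluster services on every node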
  • server1

[root@server1 ~]# 
Broadcast message from root@server1
    (unknown) at 23:40 ...

The system is going down for reboot NOW!
Connection to 172.25.60.1 closed by remote host.
Connection to 172.25.60.1 closed.
[kiosk@foundation60 Desktop]$ ssh root@172.25.60.1
root@172.25.60.1's password: 
Last login: Sat Sep 23 23:25:01 2017 from 172.25.60.250
[root@server1 ~]# /etc/init.d/luci start    # luci has to be started manually after the reboot
Start luci...                                              [  OK  ]
Point your web browser to https://server1:8084 (or equivalent) to access luci
  • server2
 [root@server2 ~]# 
Broadcast message from root@server2
    (unknown) at 23:39 ...

The system is going down for reboot NOW!
Connection to 172.25.60.2 closed by remote host.
Connection to 172.25.60.2 closed.
[root@server3 yum.repos.d]# ssh root@172.25.60.2
root@172.25.60.2's password: 
Last login: Sat Sep 23 23:25:18 2017 from server3

Nodes added successfully (screenshot)
Check from the command line:

[root@server1 ~]# clustat
Cluster Status for westos_dou @ Sat Sep 23 23:50:27 2017
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server1                                     1 Online, Local
 server2                                     2 Online
[root@server1 ~]# cd /etc/cluster/
[root@server1 cluster]# ls
cluster.conf  cman-notify.d
[root@server1 cluster]# cat cluster.conf 
<?xml version="1.0"?>
<cluster config_version="1" name="westos_dou">
    <clusternodes>
        <clusternode name="server1" nodeid="1"/>
        <clusternode name="server2" nodeid="2"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices/>
    <rm/>
</cluster>

2: Add fencing

Why?
If a node hangs, the standby assumes it is dead and takes over; when the hung node comes back to life a moment later, both sides think they own the service, which is split-brain. Fencing forcibly power-cycles the stuck node so this cannot happen.
(screenshot: add the fence_xvm fence device, named vmfence, in the luci web UI)

[root@server1 cluster]# cat cluster.conf 
<?xml version="1.0"?>
<cluster config_version="2" name="westos_dou">
    <clusternodes>
        <clusternode name="server1" nodeid="1"/>
        <clusternode name="server2" nodeid="2"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_xvm" name="vmfence"/>
    </fencedevices>
</cluster>

Installation on the physical host

Install the fence daemon for virtual machines:
yum install -y fence-virtd-multicast.x86_64 fence-virtd-libvirt.x86_64 fence-virtd.x86_64
[root@foundation60 ~]# rm -fr /etc/fence_virt.conf 
[root@foundation60 ~]# fence_virtd -c
Parsing of /etc/fence_virt.conf failed.
Start from scratch [y/N]? y
Module search path [/usr/lib64/fence-virt]: 

Available backends:
    libvirt 0.1
Available listeners:
    multicast 1.2

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]: 

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]: 

Using ipv4 as family.

Multicast IP Port [1229]: 

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [none]: br0

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]: 

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]: 

Configuration complete.

=== Begin Configuration ===
listeners {
    multicast {
        key_file = "/etc/cluster/fence_xvm.key";
        interface = "br0";
        port = "1229";
        address = "225.0.0.12";
        family = "ipv4";
    }

}

fence_virtd {
    backend = "libvirt";
    listener = "multicast";
    module_path = "/usr/lib64/fence-virt";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
[root@foundation60 ~]# mkdir /etc/cluster/
[root@foundation60 cluster]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1   # generate a 128-byte random key
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000152835 s, 838 kB/s
[root@foundation60 cluster]# file /etc/cluster/fence_xvm.key 
/etc/cluster/fence_xvm.key: data
[root@foundation60 cluster]# systemctl restart fence_virtd.service
[root@foundation60 cluster]# netstat -anulp | grep :1229
udp        0      0 0.0.0.0:1229            0.0.0.0:*                           11379/fence_virtd   
scp fence_xvm.key root@172.25.60.1:/etc/cluster/
scp fence_xvm.key root@172.25.60.2:/etc/cluster/
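Once the key is in place on both nodes, communication with fence_virtd on the host can be checked from either node (a verification step not shown in the original):

[root@server1 ~]# fence_xvm -o list    # should print the virtual machine names and UUIDs known to the host; if it hangs, recheck the multicast interface and the key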

server1 & server2
(screenshots: in the luci web UI, add the vmfence device to each node as a fence method, using the virtual machine's UUID as the domain)

[root@server1 cluster]# cat cluster.conf 
<?xml version="1.0"?>
<cluster config_version="6" name="westos_dou">
    <clusternodes>
        <clusternode name="server1" nodeid="1">
            <fence>
                <method name="fence1">
                    <device domain="2d2e2e67-3040-4c42-936c-42dfb6baf85e" name="vmfence"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="server2" nodeid="2">
            <fence>
                <method name="fence2">
                    <device domain="5e78664f-7f5d-470b-913d-15b613a3f18c" name="vmfence"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_xvm" name="vmfence"/>
    </fencedevices>
</cluster>

fence_node server1   # fence server1: the host forcibly takes the node down (it is not brought back up)

fence_node server2   # likewise, fence server2
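Another common way to exercise fencing (an assumption, not something shown in the original) is to crash a node's kernel and watch the surviving node fence it:

[root@server2 ~]# echo c > /proc/sysrq-trigger    # crash server2; server1 should fence it immediately (power-off or reboot, depending on the fence agent's default action)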
[root@foundation60 cluster]# cat /etc/fence_virt.conf 
listeners {
    multicast {
        key_file = "/etc/cluster/fence_xvm.key";
        interface = "br0";
        port = "1229";
        address = "225.0.0.12";
        family = "ipv4";
    }

}

fence_virtd {
    backend = "libvirt";
    listener = "multicast";
    module_path = "/usr/lib64/fence-virt";
}

[root@foundation60 cluster]# brctl show
bridge name bridge id       STP enabled interfaces
br0     8000.54ee756e4c14   no      enp3s0
                            vnet0
                            vnet1
                            vnet2
virbr0      8000.5254001bdfcc   yes     virbr0-nic
virbr1      8000.5254006adda7   yes     virbr1-nic

3: Add the service

(screenshots: in the luci web UI, create a failover domain, add an IP address resource for the VIP 172.25.60.100 and a script resource for /etc/init.d/httpd, then group them into a service named apache; a sketch of the resulting configuration follows)
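The screenshots correspond roughly to the following additions in the <rm> section of cluster.conf. This is a reconstruction, not the author's actual file: the VIP 172.25.60.100, the service name apache and the httpd init script come from the test below, while the failover domain name and the priority/recovery settings are assumptions:

<rm>
    <failoverdomains>
        <failoverdomain name="webfail" nofailback="0" ordered="1" restricted="1">
            <failoverdomainnode name="server1" priority="1"/>
            <failoverdomainnode name="server2" priority="2"/>
        </failoverdomain>
    </failoverdomains>
    <resources>
        <ip address="172.25.60.100" monitor_link="on"/>
        <script file="/etc/init.d/httpd" name="httpd"/>
    </resources>
    <service domain="webfail" exclusive="0" name="apache" recovery="relocate">
        <ip ref="172.25.60.100"/>
        <script ref="httpd"/>
    </service>
</rm>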
Test

#1: Write test pages
[root@server1 html]# /etc/init.d/httpd stop
Stopping httpd:                                            [  OK  ]
[root@server1 html]# cat /var/www/html/index.html 
server1
[root@server2 html]# /etc/init.d/httpd stop
Stopping httpd:                                            [  OK  ]
[root@server2 html]# cat /var/www/html/index.html 
server2
[root@server1 html]# clustat
Cluster Status for westos_dou @ Sun Sep 24 00:50:12 2017
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server1                                     1 Online, Local, rgmanager
 server2                                     2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:apache                 server1                        started 
#2: Take server1 down
[root@server2 html]# clustat
Cluster Status for westos_dou @ Sun Sep 24 00:50:39 2017
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server1                                     1 Online, rgmanager
 server2                                     2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:apache                 server1                        stopping 
 # Test the VIP failover from the host machine
 [root@foundation60 cluster]# curl 172.25.60.100
server1
[root@foundation60 cluster]# curl 172.25.60.100
server2
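Failover can also be driven by hand with rgmanager's clusvcadm (not shown in the original, but standard with this stack):

[root@server1 ~]# clusvcadm -r apache -m server2    # relocate the apache service to server2
[root@server1 ~]# clusvcadm -d apache               # disable (stop) the service
[root@server1 ~]# clusvcadm -e apache               # enable (start) it again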

Add shared storage

  • Start server3
  • Create the shared disk (screenshots)

yum install scsi-* -y
[root@server3 ~]# cd /etc/tgt
[root@server3 tgt]# ls
targets.conf
[root@server3 tgt]# vim targets.conf 
[root@server3 tgt]# /etc/init.d/tgtd start
Starting SCSI target daemon:                               [  OK  ]

(screenshot: the edited targets.conf)
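Judging from the tgt-admin -s output below (the target name, the /dev/vdb backing store, and the two initiator addresses), the edited targets.conf most likely looks something like this (a reconstruction, not the original file):

<target iqn.2017-09.com.example:server.target1>
    backing-store /dev/vdb          # export the 8 GB virtual disk
    initiator-address 172.25.60.1   # only server1 ...
    initiator-address 172.25.60.2   # ... and server2 may log in
</target>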

[root@server3 tgt]# tgt-admin -s
Target 1: iqn.2017-09.com.example:server.target1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags: 
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 8590 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/vdb
            Backing store flags: 
    Account information:
    ACL information:
        172.25.60.1
        172.25.60.2

server1 & server2

[root@server1 ~]# yum install -y iscsi-*
[root@server1 ~]# iscsiadm -m discovery -t st -p 172.25.60.3
Starting iscsid:                                           [  OK  ]
172.25.60.3:3260,1 iqn.2017-09.com.example:server.target1
[root@server1 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2017-09.com.example:server.target1, portal: 172.25.60.3,3260] (multiple)
Login to [iface: default, target: iqn.2017-09.com.example:server.target1, portal: 172.25.60.3,3260] successful.
[root@server1 ~]# fdisk -l

Disk /dev/vda: 21.5 GB, 21474836480 bytes
16 heads, 63 sectors/track, 41610 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0008cb89

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3        1018      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/vda2            1018       41611    20458496   8e  Linux LVM
Partition 2 does not end on cylinder boundary.

Disk /dev/mapper/VolGroup-lv_root: 19.9 GB, 19906166784 bytes
255 heads, 63 sectors/track, 2420 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/VolGroup-lv_swap: 1040 MB, 1040187392 bytes
255 heads, 63 sectors/track, 126 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sda: 8589 MB, 8589934592 bytes
64 heads, 32 sectors/track, 8192 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
[root@server1 ~]# fdisk -cu /dev/sda
# use partition type 8e (Linux LVM) so the storage can be extended later
Disk /dev/sda: 8589 MB, 8589934592 bytes
64 heads, 32 sectors/track, 8192 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x406efa26

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               2        8192     8387584   8e  Linux LVM
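The interactive fdisk dialogue itself was not captured; a typical keystroke sequence to create this single LVM partition would be (an illustration, not the author's transcript):

Command (m for help): n                # new partition
   p                                   # primary
Partition number (1-4): 1
First sector: <Enter>                  # accept the defaults, use the whole disk
Last sector: <Enter>
Command (m for help): t                # change the partition type
Hex code (type L to list codes): 8e    # Linux LVM
Command (m for help): p                # print the table (output shown above)
Command (m for help): w                # write the table and exit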
[root@server1 ~]# cat /proc/partitions 
major minor  #blocks  name

 252        0   20971520 vda
 252        1     512000 vda1
 252        2   20458496 vda2
 253        0   19439616 dm-0
 253        1    1015808 dm-1
   8        0    8388608 sda
   8        1    8387584 sda1

server2

[root@server2 html]# cat /proc/partitions 
major minor  #blocks  name

 252        0   20971520 vda
 252        1     512000 vda1
 252        2   20458496 vda2
 253        0   19439616 dm-0
 253        1    1015808 dm-1
   8        0    8388608 sda
[root@server2 html]# partprobe 
Warning: WARNING: the kernel failed to re-read the partition table on /dev/vda (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.
[root@server2 html]# cat /proc/partitions 
major minor  #blocks  name

 252        0   20971520 vda
 252        1     512000 vda1
 252        2   20458496 vda2
 253        0   19439616 dm-0
 253        1    1015808 dm-1
   8        0    8388608 sda
   8        1    8387584 sda1

Create the extendable LVM storage. After every pv/vg/lv operation on server1, check on server2 that it sees the same state.
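Because the volume group is created as clustered (the c attribute in the vgs output further down), clvmd needs to be running on both nodes for LVM metadata changes to propagate. A quick check, assuming the lvm2-cluster package is already installed on both nodes:

[root@server1 ~]# /etc/init.d/clvmd status   # should be running on server1 and server2
[root@server1 ~]# chkconfig clvmd on         # so it also comes back after a fence/reboot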

[root@server1 ~]# pvcreate /dev/sda1
  dev_is_mpath: failed to get device for 8:1
  Physical volume "/dev/sda1" successfully created
[root@server1 ~]# vgcreate clustervg /dev/sda1
  Clustered volume group "clustervg" successfully created
[root@server1 html]# lvcreate -L 2G -n demo clustervg
  Logical volume "demo" created
[root@server1 ~]# mkfs.ext4 /dev/clustervg/demo 
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912

Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 25 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@server1 ~]# lvextend -L +2G /dev/clustervg/demo 
  Extending logical volume demo to 4.00 GiB
  Logical volume demo successfully resized
[root@server1 ~]# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup  -wi-ao----  18.54g                                             
  lv_swap VolGroup  -wi-ao---- 992.00m                                             
  demo    clustervg -wi-a-----   4.00g                                             
[root@server1 ~]# resize2fs /dev/clustervg/demo 
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/clustervg/demo to 1048576 (4k) blocks.
The filesystem on /dev/clustervg/demo is now 1048576 blocks long.
[root@server1 ~]# vim /etc/lvm/lvm.conf


(screenshot: /etc/lvm/lvm.conf)
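For clustered LVM, the edit made in /etc/lvm/lvm.conf is normally the locking type; a guess at what the screenshot shows:

    # /etc/lvm/lvm.conf
    locking_type = 3    # clustered locking through clvmd (the default, 1, is local file-based locking)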

[root@server2 html]# pvs
  PV         VG       Fmt  Attr PSize  PFree
  /dev/vda2  VolGroup lvm2 a--  19.51g    0 
[root@server2 html]# pvs
  PV         VG       Fmt  Attr PSize  PFree
  /dev/sda1           lvm2 a--   8.00g 8.00g
  /dev/vda2  VolGroup lvm2 a--  19.51g    0 
[root@server2 html]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  VolGroup    1   2   0 wz--n- 19.51g    0 
  clustervg   1   0   0 wz--nc  8.00g 8.00g
[root@server2 html]# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup  -wi-ao----  18.54g                                             
  lv_swap VolGroup  -wi-ao---- 992.00m                                             
  demo    clustervg -wi-a-----   2.00g            