1. GlusterFS
Two machines are used for this installation test:

Hostname | IP            | Notes
netron   | 192.168.1.231 |
horizen  | 192.168.1.95  |
1.1. Basic setup
[root@netron ~]# vi /etc/sysconfig/selinux
SELINUX=disabled
Save the file. Disabling SELinux this way only takes effect after a reboot.
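To apply the change without rebooting, SELinux can also be switched to permissive mode at runtime:
[root@netron ~]# setenforce 0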
## Installing glusterfs against an EPEL release that is too new for this system reports "Error: xz compression not available"; with no EPEL at all, the install fails on missing dependencies.
[root@netron ~]# wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-6-8.noarch.rpm
## Without EPEL, the dependency errors look like this:
[root@netron ~]# yum install glusterfs-server
Error: Package: glusterfs-server-3.7.5-1.el6.x86_64 (glusterfs-epel)
Requires: liburcu-cds.so.1()(64bit)
Error: Package: glusterfs-server-3.7.5-1.el6.x86_64 (glusterfs-epel)
Requires: pyxattr
Error: Package: glusterfs-server-3.7.5-1.el6.x86_64 (glusterfs-epel)
Requires: liburcu-bp.so.1()(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
1.2. Install
[root@netron ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
[root@netron ~]# yum install glusterfs-server
[root@netron ~]# /etc/init.d/glusterd restart ## start the service on the first node
[root@horizen ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
[root@horizen ~]# yum install glusterfs-server
[root@horizen ~]# /etc/init.d/glusterd restart ## start the service on the second node
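To keep glusterd running across reboots (assuming that is wanted for this test setup), enable it at boot on both nodes:
[root@netron ~]# chkconfig glusterd on
[root@horizen ~]# chkconfig glusterd on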
# Run on the first node:
[root@netron ~]# gluster peer probe horizen
[root@netron ~]# gluster peer status
Number of Peers: 1
Hostname: horizen
Uuid: a0b34d20-191d-4fa6-861a-de58f1fa53f6
State: Peer in Cluster (Connected)
Run on the second node:
[root@horizen ~]# gluster peer status
Number of Peers: 1
Hostname: 192.168.1.231
Uuid: c83476a2-a903-4a3b-9783-a7ddfeb61486
State: Peer in Cluster (Connected)
Other names:
Netron
## The output above shows both machines as members of the cluster, so the configuration is correct.
## If a volume is created using hostnames, clients must also mount it by hostname; otherwise mounting fails.
[root@netron ~]# gluster volume create gv0 replica 2 netron:/export/sdb1/brick horizen:/export/sdb1/brick
volume create: gv0: failed: Host netron is not in 'Peer in Cluster' state
Because the cluster peers were added by hostname, every server's IP and hostname must be listed in /etc/hosts on all nodes; otherwise name resolution fails and volume creation fails.
[root@netron ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.95 horizen
192.168.1.231 netron
[root@horizen ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.231 netron
192.168.1.95 horizen
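Before creating the volume, it is worth confirming that name resolution works and that the brick directories exist on both nodes (assuming /export/sdb1 is a disk already formatted and mounted on each machine):
[root@netron ~]# getent hosts horizen
192.168.1.95    horizen
[root@netron ~]# mkdir -p /export/sdb1/brick
[root@horizen ~]# mkdir -p /export/sdb1/brick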
# Create the replicated volume gv0 with 2 replicas:
[root@netron ~]# gluster volume create gv0 replica 2 netron:/export/sdb1/brick horizen:/export/sdb1/brick
[root@netron ~]# gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: 83bb57a6-3969-4c44-a601-c3a2646f51cf
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: netron:/export/sdb1/brick
Brick2: horizen:/export/sdb1/brick
Options Reconfigured:
performance.readdir-ahead: on
## Start the volume
[root@netron ~]# gluster volume start gv0
volume start: gv0: success
## glusterd and glusterfsd now run automatically on both machines; starting glusterfsd by hand fails.
[root@netron ~]# /etc/init.d/glusterd status
glusterd (pid 6070) is running...
[root@netron ~]# /etc/init.d/glusterfsd status
glusterfsd (pid 15667) is running...
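The brick processes can also be inspected through gluster itself:
[root@netron ~]# gluster volume status gv0 ## shows each brick process, its port, and whether it is online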
1.3. Client
[root@client ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
[root@client ~]# yum install glusterfs-client
# Mount the volume (by volume name, not by brick path)
[root@client ~]# mount.glusterfs netron:/gv0 /mnt/glusterfs
## Do not write or delete data directly on the brick paths on the server nodes; perform all file operations through the client mount.
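To make the mount persistent across reboots, an /etc/fstab entry like the following can be used (a sketch; _netdev delays mounting until the network is up):
netron:/gv0 /mnt/glusterfs glusterfs defaults,_netdev 0 0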
# Performance test
fio --direct=1 --rw=rw --bs=1m --size=5g --numjobs=64 --group_reporting --name=test-rw ## size is the per-job file size; numjobs is the number of concurrent jobs
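Note that fio writes its test files into the current directory, so run it from inside the mount (or point it there with --directory) to actually exercise GlusterFS:
[root@client ~]# cd /mnt/glusterfs && fio --direct=1 --rw=rw --bs=1m --size=5g --numjobs=64 --group_reporting --name=test-rw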
1.4. Configuring as cloud disks (OpenStack Cinder)
Edit the cinder.conf file:
vi /etc/cinder/cinder.conf
glusterfs_shares_config=/etc/cinder/glusterfs_shares
glusterfs_mount_point_base=$state_path/mnt
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
[root@glance cinder]# vi /etc/cinder/glusterfs_shares
netron:/gv0
# Restart the cinder services; the GlusterFS share will be mounted automatically.
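A quick way to verify the backend (assuming the cinder CLI is configured; the name test-gluster is arbitrary) is to create a small volume and check that its file appears under the mount point base:
[root@glance cinder]# cinder create --display-name test-gluster 1
[root@glance cinder]# ls /var/lib/cinder/mnt/*/ ## $state_path defaults to /var/lib/cinder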
1.5. Volume types
GlusterFS 3.2.4/5 supports five volume types: distribute, stripe, replica, distribute-stripe, and distribute-replica. Together they cover applications' differing needs for high performance and high availability.
(1) distribute volume: files are spread across the brick servers by a hash algorithm; this is the foundation of GlusterFS and its defining feature;
(2) stripe volume: similar to RAID 0; the stripe count equals the number of brick servers; files are split into chunks distributed round-robin across the brick servers; concurrency is at chunk granularity, giving high performance on large files;
(3) replica volume: similar to RAID 1; the replica count equals the number of brick servers, so every brick server holds the same file data, forming an n-way mirror with high availability;
(4) distribute-stripe volume: the number of brick servers is a multiple of the stripe count; combines the characteristics of distribute and stripe volumes;
(5) distribute-replica volume: the number of brick servers is a multiple of the replica count; combines the characteristics of distribute and replica volumes.
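For example, a distribute-replica volume only needs the brick count to be a multiple of the replica count. A sketch with four hypothetical servers (server1..server4): gluster pairs the bricks into two replica sets and distributes files across the pairs:
gluster volume create gv1 replica 2 server1:/export/sdb1/brick server2:/export/sdb1/brick server3:/export/sdb1/brick server4:/export/sdb1/brick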
1.6. Routine operations
1.6.1. Deleting a volume
gluster volume stop gv0 ## stop the volume
gluster volume delete gv0 ## delete the volume
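Deleting a volume does not remove the data on the bricks. To reuse a brick path for a new volume, its GlusterFS metadata must be cleared first (a sketch, run on each node):
setfattr -x trusted.glusterfs.volume-id /export/sdb1/brick
setfattr -x trusted.gfid /export/sdb1/brick
rm -rf /export/sdb1/brick/.glusterfs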
1.6.2. Removing a machine from the cluster
gluster peer detach <IP|hostname>
1.6.3. Restricting IP access to a volume
gluster volume set gv0 auth.allow 192.168.1.*
1.6.4. Adding machines to the cluster
Add servers in numbers that match your volume type (for the replica-2 volume gv0, bricks must be added in pairs); existing data is not moved automatically, see the rebalance sketch after the commands.
gluster peer probe IP1
gluster peer probe IP2
gluster volume add-brick gv0 IP1:/export/sdb1/brick IP2:/export/sdb1/brick
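Since add-brick leaves existing files where they are, a rebalance spreads them onto the new bricks:
gluster volume rebalance gv0 start
gluster volume rebalance gv0 status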
1.6.5. Repairing GlusterFS disk data
For example, if the server at IP1 goes down while in use and is replaced by IP2, the data must be resynchronized:
gluster volume replace-brick gv0 IP1:/export/sdb1/brick IP2:/export/sdb1/brick commit force
gluster volume heal gv0 full
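Heal progress can then be followed with:
gluster volume heal gv0 info ## lists entries still pending heal on each brick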