I. MFS Installation and Deployment
MooseFS is a fault-tolerant, network-distributed file system. It spreads data across multiple physical servers while presenting a single, unified resource to the user.
Lab network environment:
mfsclient 172.25.55.250
mfsmaster 172.25.55.1
mfschunkserver 172.25.55.2 172.25.55.3
Build and install from the source package:
[root@server1 ~]# ln -s moosefs-3.0.80-1.tar.gz moosefs-3.0.80.tar.gz
[root@server1 ~]# yum install -y rpm-build-4.8.0-37.el6.x86_64
[root@server1 ~]# yum install -y fuse-devel zlib-devel libpcap-devel    # build dependencies
[root@server1 ~]# rpmbuild -tb moosefs-3.0.80.tar.gz    # build the RPM packages from the tarball
1. On the master, install the following packages (moosefs-cgiserv provides the MFS web monitoring page):
[root@server1 x86_64]# pwd
/root/rpmbuild/RPMS/x86_64
[root@server1 x86_64]# ls
moosefs-cgi-3.0.80-1.x86_64.rpm moosefs-client-3.0.80-1.x86_64.rpm
moosefs-cgiserv-3.0.80-1.x86_64.rpm moosefs-master-3.0.80-1.x86_64.rpm
moosefs-chunkserver-3.0.80-1.x86_64.rpm moosefs-metalogger-3.0.80-1.x86_64.rpm
moosefs-cli-3.0.80-1.x86_64.rpm moosefs-netdump-3.0.80-1.x86_64.rpm
[root@server1 x86_64]# yum install -y moosefs-master-3.0.80-1.x86_64.rpm moosefs-cgi-3.0.80-1.x86_64.rpm moosefs-cgiserv-3.0.80-1.x86_64.rpm
Add a local hosts entry so that the name mfsmaster resolves to this host.
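The hosts entry itself is not shown; a minimal sketch, assuming /etc/hosts is used for name resolution:
[root@server1 ~]# echo "172.25.55.1  mfsmaster" >> /etc/hosts
[root@server1 ~]# ping -c1 mfsmaster    # confirm the name resolves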
Start the master. On startup, metadata.mfs is renamed to metadata.mfs.back and changelog.0.mfs is created:
[root@server1 mfs]# ls
metadata.mfs metadata.mfs.empty
[root@server1 mfs]# pwd
/var/lib/mfs
[root@server1 mfs]# ll
total 8
-rw-r--r-- 1 mfs mfs 8 Aug 25 22:28 metadata.mfs
-rw-r--r-- 1 mfs mfs 8 Aug 25 22:11 metadata.mfs.empty
[root@server1 mfs]# mfsmaster
[root@server1 mfs]# ll
total 12
-rw-r----- 1 mfs mfs 45 Aug 25 22:31 changelog.0.mfs
-rw-r--r-- 1 mfs mfs 8 Aug 25 22:28 metadata.mfs.back
-rw-r--r-- 1 mfs mfs 8 Aug 25 22:11 metadata.mfs.empty
Run mfscgiserv to start the MFS web service; the file system status can then be viewed in a browser:
[root@server1 mfs]# mfscgiserv
lockfile created and locked
starting simple cgi server (host: any , port: 9425 , rootpath: /usr/share/mfscgi)
# listens on port 9425
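The status page can then be opened in a browser at http://mfsmaster:9425 (here http://172.25.55.1:9425).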
2. On the chunk servers (data storage side):
[root@server2 ~]# yum install -y moosefs-chunkserver-3.0.80-1.x86_64.rpm
[root@server2 ~]# cd /etc/mfs/
[root@server2 mfs]# ls
mfschunkserver.cfg mfshdd.cfg
mfschunkserver.cfg.sample mfshdd.cfg.sample
[root@server2 mfs]# vim mfshdd.cfg
[root@server2 mfs]# mkdir /mnt/chunk1
[root@server2 mfs]# ll -d /mnt/chunk1/
drwxr-xr-x 2 root root 4096 Aug 25 22:58 /mnt/chunk1/
[root@server2 mfs]# chown mfs.mfs /mnt/chunk1/
[root@server2 mfs]# ll -d /mnt/chunk1/
drwxr-xr-x 2 mfs mfs 4096 Aug 25 22:58 /mnt/chunk1/
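The edit to mfshdd.cfg is not shown above; it presumably registers the storage directory prepared here, i.e. a single uncommented line:
/mnt/chunk1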
Add the hosts entry pointing to mfsmaster here as well.
Then run mfschunkserver to start the chunk service.
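A sketch of these two steps (the hosts entry is an assumption, mirroring the master setup):
[root@server2 mfs]# echo "172.25.55.1  mfsmaster" >> /etc/hosts
[root@server2 mfs]# mfschunkserver    # start the chunkserver daemon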
Do the same on server3 as on server2.
3. On 172.25.55.250, install moosefs-client, edit its configuration file, and create the MFS mount directory.
[root@server1 x86_64]# ls
moosefs-cgi-3.0.80-1.x86_64.rpm moosefs-client-3.0.80-1.x86_64.rpm
moosefs-cgiserv-3.0.80-1.x86_64.rpm moosefs-master-3.0.80-1.x86_64.rpm
moosefs-chunkserver-3.0.80-1.x86_64.rpm moosefs-metalogger-3.0.80-1.x86_64.rpm
moosefs-cli-3.0.80-1.x86_64.rpm moosefs-netdump-3.0.80-1.x86_64.rpm
[root@server1 x86_64]# scp moosefs-client-3.0.80-1.x86_64.rpm root@172.25.55.250:
root@172.25.55.250's password:
moosefs-client-3.0.80-1.x86_64.rpm 100% 265KB 264.9KB/s 00:00
[root@foundation55 ~]# yum install -y moosefs-client-3.0.80-1.x86_64.rpm
Then run mfsmount and verify the mount.
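A minimal sketch of the client-side steps, assuming /mnt/mfs as the mount point:
[root@foundation55 ~]# echo "172.25.55.1  mfsmaster" >> /etc/hosts
[root@foundation55 ~]# mkdir -p /mnt/mfs
[root@foundation55 ~]# mfsmount /mnt/mfs -H mfsmaster
[root@foundation55 ~]# df -h /mnt/mfs    # the MFS filesystem should now be mounted here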
All MFS components are now deployed.
Port summary (basic background):
9420: communication between the MFS master and the chunkservers
9421: communication between the MFS master and MFS clients
9419: communication between the MFS master and mfsmetalogger
9422: communication between the chunkservers and MFS clients
9425: MFS master web interface port, for viewing the overall status
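To confirm these ports on the master, something like:
[root@server1 ~]# netstat -antlp | grep -E '9419|9420|9421|9425'    # 9419/9420/9421 belong to mfsmaster, 9425 to mfscgiserv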
II. Using MFS
1. Testing
Create directories under the default MFS mount point and check their goal (number of copies). The default goal is two copies, and it can be changed; a quick check is sketched below.
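Those commands are not shown; a sketch, assuming the directory names used below and the client mount point as the working directory:
[root@foundation55 mfs]# mkdir xiaozhuang1 xiaozhuang2
[root@foundation55 mfs]# mfsgetgoal xiaozhuang1
xiaozhuang1: 2
[root@foundation55 mfs]# mfsgetgoal xiaozhuang2
xiaozhuang2: 2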
Set the goal of the xiaozhuang1 directory to one copy:
[root@foundation55 mfs]# mfssetgoal -r 1 xiaozhuang1
xiaozhuang1:
inodes with goal changed: 1
inodes with goal not changed: 0
inodes with permission denied: 0
Copy a file into each directory and confirm that xiaozhuang1 keeps one copy while xiaozhuang2 keeps two:
[root@foundation55 mfs]# cd xiaozhuang1
[root@foundation55 xiaozhuang1]# cp /etc/passwd .
[root@foundation55 xiaozhuang1]# ls
passwd
[root@foundation55 xiaozhuang1]# mfsfileinfo passwd
passwd:
chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
copy 1: 172.25.55.2:9422 (status:VALID)
[root@foundation55 mfs]# ls
xiaozhuang1 xiaozhuang2
[root@foundation55 mfs]# cd xiaozhuang2
[root@foundation55 xiaozhuang2]# cp /etc/fstab .
[root@foundation55 xiaozhuang2]# ls
fstab
[root@foundation55 xiaozhuang2]# mfsfileinfo fstab
fstab:
chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
copy 1: 172.25.55.2:9422 (status:VALID)
copy 2: 172.25.55.3:9422 (status:VALID)
2. If a chunkserver is taken down, the client can no longer store copies on it; once the chunkserver comes back up, storage there resumes:
[root@server3 mfs]# mfschunkserver stop
sending SIGTERM to lock owner (pid:1440)
waiting for termination terminated
[root@server3 mfs]#
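Bringing the chunkserver back up and re-checking the file should show both copies as VALID again (a sketch):
[root@server3 mfs]# mfschunkserver start
[root@foundation55 xiaozhuang2]# mfsfileinfo fstab    # the copy on 172.25.55.3 reappears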
3. Recovering deleted files in MFS
First delete a file in the storage directory.
mfsgettrashtime shows how long a deleted file is kept in the trash: a file is not removed immediately after deletion but held for a quarantine period (86400 s, i.e. one day, by default).
Create and mount the MFSMETA filesystem, which contains the directories trash and trash/undel (the latter is used to recover files):
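A sketch of these steps, assuming /mnt/mfsmeta as the meta mount point:
[root@foundation55 mfs]# rm -f xiaozhuang1/passwd    # delete the file that will be recovered
[root@foundation55 mfs]# mfsgettrashtime xiaozhuang1
xiaozhuang1: 86400
[root@foundation55 mfs]# mkdir -p /mnt/mfsmeta
[root@foundation55 mfs]# mfsmount -m /mnt/mfsmeta -H mfsmaster    # -m mounts the MFSMETA filesystem
[root@foundation55 mfs]# cd /mnt/mfsmeta/trash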
[root@foundation55 trash]# find -name *passwd
./004/00000004|xiaozhuang1|passwd
[root@foundation55 trash]# mv ./004/00000004\|xiaozhuang1\|passwd undel/
[root@foundation55 trash]#
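After the move into undel, the file reappears at its original path in the regular MFS mount:
[root@foundation55 mfs]# ls xiaozhuang1/
passwd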
III. Implementing high availability
Add a new host, server4: 172.25.55.4
Configure the yum repositories (HighAvailability etc.) on server1 and server4.
Install pacemaker and corosync:
[root@server4 ~]# yum repolist
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
HighAvailability | 3.9 kB 00:00
HighAvailability/primary_db | 43 kB 00:00
LoadBalancer | 3.9 kB 00:00
LoadBalancer/primary_db | 7.0 kB 00:00
ResilientStorage | 3.9 kB 00:00
ResilientStorage/primary_db | 47 kB 00:00
ScalableFileSystem | 3.9 kB 00:00
ScalableFileSystem/primary_db | 6.8 kB 00:00
rhel6.5 | 3.9 kB 00:00
rhel6.5/primary_db | 3.1 MB 00:00
repo id repo name status
HighAvailability HighAvailability 56
LoadBalancer LoadBalancer 4
ResilientStorage ResilientStorage 62
ScalableFileSystem ScalableFileSystem 7
rhel6.5 rhel6.5 3,690
repolist: 3,819
[root@server4 ~]# yum install -y pacemaker corosync    # install
Edit the corosync configuration file:
[root@server1 ~]# cd /etc/corosync/
[root@server1 corosync]# ls
corosync.conf.example corosync.conf.example.udpu service.d uidgid.d
[root@server1 corosync]# cp corosync.conf.example corosync.conf
[root@server1 corosync]# vim corosync.conf
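The edits themselves are not shown; a minimal sketch of the parts of corosync.conf usually changed for this kind of lab (corosync 1.x plugin mode, values assumed for this network):
totem {
        interface {
                ringnumber: 0
                bindnetaddr: 172.25.55.0    # the cluster network
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}
service {
        name: pacemaker    # let corosync start pacemaker (ver: 0 = plugin mode)
        ver: 0
}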
[root@server1 corosync]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync): [OK]
[root@server1 corosync]# scp /etc/corosync/corosync.conf server4:/etc/corosync/
root@server4's password:
corosync.conf 100% 483 0.5KB/s 00:00
[root@server4 corosync]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync): [OK]
[root@server1 corosync]# crm_verify -VL
error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
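These errors mean that STONITH is enabled but no fencing resource has been defined yet. For a test setup it can be disabled (in production a real fence device should be configured instead); assuming the crm shell is available:
[root@server1 corosync]# crm configure property stonith-enabled=false
[root@server1 corosync]# crm_verify -VL    # should now pass without errors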