Ceph Advanced
Ceph Cluster Application Basics
A Ceph cluster provides storage through the following three interfaces:
- Block device: RBD
- Object storage: RGW
- File storage: CephFS
RBD Block Device
RBD is one form of block storage. It interacts with the OSDs through the librbd library and provides a high-performance, highly scalable storage backend for cloud platforms such as KVM and Kubernetes, which integrate with RBD through the qemu and libvirt utilities.
Create an RBD
- Create a storage pool
Create the pool and specify the number of PGs and PGPs. PGP determines how the data in the PGs is grouped and placed across OSDs; pgp_num is normally set to the same value as pg_num.
cephstore@ceph-deploy:~/ceph-clusters$ ceph osd pool create myrbd 64 64
cephstore@ceph-deploy:~/ceph-clusters$ ceph osd pool ls
- Enable the rbd application on the pool and initialize it
cephstore@ceph-deploy:~/ceph-clusters$ ceph osd pool application enable myrbd rbd
cephstore@ceph-deploy:~/ceph-clusters$ rbd pool init -p myrbd
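Before creating images, it can be worth a quick sanity check from the deploy node that the pool exists and carries the rbd application tag, for example:
cephstore@ceph-deploy:~/ceph-clusters$ ceph osd pool ls detail | grep myrbd
cephstore@ceph-deploy:~/ceph-clusters$ ceph osd pool application get myrbd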
Create and verify an image
An RBD pool cannot be used as a block device directly; instead, images are created in it as needed, and those images are what get used as block devices.
The rbd command creates, lists, and deletes the images backing block devices, and also handles management operations such as cloning images, creating snapshots, rolling an image back to a snapshot, and listing snapshots. For example, the commands below create two images, myimg and myimg1:
- myimg: for Linux clients running a recent kernel
- myimg1: for CentOS-family servers (only the layering feature is enabled)
cephstore@ceph-deploy:~/ceph-clusters$ rbd create myimg --size 3G --pool myrbd
cephstore@ceph-deploy:~/ceph-clusters$ rbd create myimg1 --size 3G --pool myrbd --image-feature layering
cephstore@ceph-deploy:~/ceph-clusters$ rbd ls --pool myrbd
cephstore@ceph-deploy:~/ceph-clusters$ rbd --image myimg --pool myrbd info # show image details
cephstore@ceph-deploy:~/ceph-clusters$ ceph df
Use the block device from a client
- Install ceph-common on the client
[root@ceph-client-centos7 ~]# yum install epel-release
[root@ceph-client-centos7 ~]# yum install https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm -y
[root@ceph-client-centos7 ~]# yum install ceph-common -y
- Copy the authentication files from the ceph-deploy node
cephstore@ceph-deploy:~/ceph-clusters$ scp ceph.conf ceph.client.admin.keyring root@192.168.56.110:/etc/ceph/
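With ceph-common installed and the admin keyring now in /etc/ceph, a simple connectivity check from the client before mapping anything is to query the cluster status:
[root@ceph-client-centos7 ~]# ceph -s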
- Map the image on the client
[root@ceph-client-centos7 ~]# rbd -p myrbd map myimg # fails, see the error below
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable myrbd/myimg object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
[root@ceph-client-centos7 ~]# rbd -p myrbd map myimg1
/dev/rbd0
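The myimg mapping fails because the CentOS 7 kernel does not support the object-map, fast-diff, and deep-flatten image features, while myimg1 (layering only) maps fine. If myimg itself had to be used on this client, the unsupported features could be disabled first, following the hint in the error message, and the map retried:
[root@ceph-client-centos7 ~]# rbd feature disable myrbd/myimg object-map fast-diff deep-flatten
[root@ceph-client-centos7 ~]# rbd -p myrbd map myimg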
- Verify the RBD device on the client
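For example, check that the image shows up as a mapped block device:
[root@ceph-client-centos7 ~]# rbd showmapped
[root@ceph-client-centos7 ~]# lsblk /dev/rbd0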
- Mount and use the device on the client (the image needs a filesystem first; see the note after the mount commands)
[root@ceph-client-centos7 ~]# mkdir /data/cephrbd
[root@ceph-client-centos7 ~]# mount /dev/rbd0 /data/cephrbd/
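Note: a freshly created image has no filesystem, so the mount will fail until one is made. A minimal sketch, assuming XFS is acceptable for the test (run it before the mount command above):
[root@ceph-client-centos7 ~]# mkfs.xfs /dev/rbd0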
- Test
[root@ceph-client-centos7 ~]# cd /data/cephrbd/
[root@ceph-client-centos7 cephrbd]# dd if=/dev/zero of=./file bs=1M count=200
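To confirm the write actually landed in the cluster, check the filesystem usage on the client and the pool usage from the deploy node, for example:
[root@ceph-client-centos7 cephrbd]# df -h /data/cephrbd
cephstore@ceph-deploy:~/ceph-clusters$ ceph df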
CephFS File Storage (single node)
Official documentation: https://docs.ceph.com/en/latest/cephfs/
- Clients mount this filesystem over the Ceph protocol for read/write access
- Requires deploying and adding the ceph-mds service
- Mounts go through the mon service on its port 6789
- Relies on a metadata pool and a data pool to store the metadata and the actual file data
- A single-node deployment is a single point of failure; a highly available deployment is covered later
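To make the pieces above concrete, a kernel-client mount ends up looking roughly like the sketch below; <mon-ip>, <admin-key>, and the mount point are placeholders, and the actual commands are covered in the client mount section later:
mount -t ceph <mon-ip>:6789:/ /mnt/cephfs -o name=admin,secret=<admin-key>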
Deploy and add the MDS service
- Pick a node and configure the Tsinghua and Ceph repositories on it; in this test environment the MDS is co-located with ceph-mon3
root@ceph-mon3:~# apt-cache madison ceph-mds
root@ceph-mon3:~# apt install ceph-mds -y
- From the ceph-deploy node, create the MDS service and add it to the cluster
cephstore@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mon3
- Check the MDS status: the filesystem pools have not been created yet, so the MDS stays in standby
cephstore@ceph-deploy:~/ceph-cluster$ ceph mds stat
1 up:standby
- Create the metadata and data pools for CephFS
cephstore@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-data 64 64
pool 'cephfs-data' created
cephstore@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-metadata 32 32
pool 'cephfs-metadata' created
- Check the cluster status
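For example, from the deploy node the two new pools should now show up in the pool list and in the cluster status:
cephstore@ceph-deploy:~/ceph-cluster$ ceph osd pool ls
cephstore@ceph-deploy:~/ceph-cluster$ ceph -s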
- Create the CephFS filesystem and verify it
cephstore@ceph-deploy:~/ceph-cluster$ ceph fs new mycephfs cephfs-metadata cephfs-data
new fs with metadata pool 16 and data pool 15
cephstore@ceph-deploy:~/ceph-cluster$ ceph fs ls # list filesystems
name: mycephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
- Status checks
ceph fs status mycephfs
ceph -s
ceph mds stat
Client mount test
- Prepare two test nodes, one CentOS 7 and one Ubuntu, and mount CephFS on them for application testing
- Install nginx on both client nodes and serve static files from CephFS as the test workload
Install ceph-common on the clients
# Prepare the package repositories
## CentOS 7
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install epel-release
yum install https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm -y
yum -y install ceph-common
# Ubuntu 18.04 LTS
wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic main" | sudo tee -a /etc/apt/sources.list
sudo apt update
apt install ceph-common -y
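A quick check on both clients that the package installed correctly is to print the version reported by the CLI (it should match the repository configured above):
ceph --version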
Create the client mount
- Copy ceph.client.admin.keyring from the ceph-deploy node to the CentOS 7 client
cephstore@ceph-deploy:~/ceph-cluster$ scp ceph.conf ceph.client.admin.keyring