Overview
The previous article completed the deployment of the Ceph cluster; this one installs the storage applications on top of it.
Application Deployment
Block Storage Deployment
Create the storage pool. Run this on node01 only.
The two numbers at the end of the pool-creation command (the two 1024s in ceph osd pool create rbdpool 1024 1024) are the pool's pg_num and pgp_num, i.e. the number of placement groups (PGs) backing the pool. The Ceph documentation recommends that the total number of PGs across all pools in the cluster be roughly (number of OSDs × 100) / data redundancy factor, where the redundancy factor for replicated pools is the replica count, 3 for three-way replication. Because file and object storage services will be created later, the value chosen here is kept below the calculated total.
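To make the formula concrete for this cluster, assuming the 80 OSDs shown later in the ceph -s output and three-way replication (the default for replicated pools), the suggested total is roughly:
(80 × 100) / 3 ≈ 2667 PGs across all pools
so rbdpool gets 1024 (pg_num is conventionally a power of two), leaving part of that budget for the CephFS pools created below.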
[root@node01 ~]# ceph osd pool create rbdpool 1024 1024
pool 'rbdpool' created
[root@node01 ~]# ceph osd pool application enable rbdpool rbd
enabled application 'rbd' on pool 'rbdpool'
[root@node01 ~]# ceph df
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
ssd 627 TiB 625 TiB 2.3 TiB 2.4 TiB 0.39
TOTAL 627 TiB 625 TiB 2.3 TiB 2.4 TiB 0.39
POOLS:
POOL ID STORED OBJECTS USED %USED MAX AVAIL
rbdpool 1 0 B 0 0 B 0 198 TiB
[root@node01 ~]#
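If you later want to confirm or adjust the pool parameters, the following sketch shows the usual commands; the value 2048 is only an illustration, not something applied to this cluster:
# Inspect the PG count and replica size of the pool
ceph osd pool get rbdpool pg_num
ceph osd pool get rbdpool size
# Raise the PG count if the pool grows; pgp_num should follow pg_num
ceph osd pool set rbdpool pg_num 2048
ceph osd pool set rbdpool pgp_num 2048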
Create the block devices. Run this on node01 only. Two 1 TiB volumes are created here, and the details of lun0 are then inspected.
[root@node01 ~]# rbd create rbdpool/lun0 --size 1T --image-format 2 --image-feature layering
[root@node01 ~]# rbd create rbdpool/lun1 --size 1T --image-format 2 --image-feature layering
[root@node01 ~]# rbd info rbdpool/lun0
rbd image 'lun0':
size 1 TiB in 262144 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 77972e2984ae
block_name_prefix: rbd_data.77972e2984ae
format: 2
features: layering
op_features:
flags:
create_timestamp: Mon Dec 25 22:36:43 2023
access_timestamp: Mon Dec 25 22:36:43 2023
modify_timestamp: Mon Dec 25 22:36:43 2023
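As an optional quick check, rbd ls with the long-listing option shows both images along with their size and format:
rbd ls -l rbdpool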
At this point, Ceph block storage deployment is complete. For how to use the block devices, refer to the next section on client environment setup and testing.
File Storage Deployment
The steps in this subsection follow the block storage deployment above, but file storage can also be deployed on its own. Deploy the MDS service by running the following on each of node01 through node03.
[root@node01 ~]# mkdir -p /var/lib/ceph/mds/ceph-`hostname`
[root@node01 ~]# sudo ceph auth get-or-create mds.`hostname` mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' > /var/lib/ceph/mds/ceph-`hostname`/keyring
[root@node01 ~]# sudo systemctl start ceph-mds@`hostname`
[root@node01 ~]# systemctl status ceph-mds@node01
● ceph-mds@node01.service - Ceph metadata server daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2023-12-25 23:34:53 CST; 33s ago
Main PID: 225211 (ceph-mds)
Tasks: 15
Memory: 17.2M
CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@node01.service
└─225211 /usr/bin/ceph-mds -f --cluster ceph --id node01 --setuser ceph --setgroup ceph
Dec 25 23:34:53 node01 systemd[1]: Started Ceph metadata server daemon.
Dec 25 23:34:53 node01 ceph-mds[225211]: starting mds.node01 at
[root@node01 ~]#
[root@node01 ~]# ceph -s
cluster:
id: 3fd206a8-b655-4d7d-9dab-b70b6f3e065b
health: HEALTH_OK
services:
mon: 3 daemons, quorum node01,node02,node03 (age 2h)
mgr: node01_mgr(active, starting, since 1.26552s)
mds: 3 up:standby
osd: 80 osds: 80 up (since 89m), 80 in (since 89m)
……
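Two housekeeping steps are commonly added on each MDS node, sketched below under the assumption that the daemon runs as the ceph user (as the systemctl output above shows): hand the freshly created keyring directory to that user, and enable the unit so the MDS comes back after a reboot.
# Ensure the ceph user can read the keyring created above
chown -R ceph:ceph /var/lib/ceph/mds/ceph-`hostname`
# Start the MDS automatically at boot
systemctl enable ceph-mds@`hostname`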
Create the storage pools and the file system. Run this on node01.
[root@node01 ~]# ceph osd pool create cephfs_data 1024 1024
pool 'cephfs_data' created
[root@node01 ~]# ceph osd pool create cephfs_metadata 1024 1024
pool 'cephfs_metadata' created
[root@node01 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 3 and data pool 2
[root@node01 ~]# ceph df
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
ssd 627 TiB 625 TiB 2.4 TiB 2.5 TiB 0.39
TOTAL 627 TiB 625 TiB 2.4 TiB 2.5 TiB 0.39
POOLS:
POOL ID STORED OBJECTS USED %USED MAX AVAIL
rbdpool 1 13 GiB 8.34k 39 GiB 0 198 TiB
cephfs_data 2 0 B 0 0 B 0 198 TiB
cephfs_metadata 3 0 B 0 0 B 0 198 TiB
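Once the file system exists, ceph fs ls and ceph fs status give a quick view of which pools back it and which MDS ranks are active; a sketch (output varies with cluster state):
ceph fs ls
ceph fs status cephfs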
Check the cluster status with ceph -s.
[root@node01 ~]# ceph -s
cluster:
id: 3fd206a8-b655-4d7d-9dab-b70b6f3e065b
health: HEALTH_OK
services:
mon: 3 daemons, quorum node01,node02,node03 (age 2h)
mgr: node01_mgr(active, starting, since 0.795456s)
mds: cephfs:1 {0=node02=up:active}