Let's Learn Ceph 02: Ceph Pools

ceph pool

Environment

192.168.126.101 ceph01
192.168.126.102 ceph02
192.168.126.103 ceph03
192.168.126.104 ceph04
192.168.126.105 ceph-admin

192.168.48.11 ceph01
192.168.48.12 ceph02
192.168.48.13 ceph03
192.168.48.14 ceph04
192.168.48.15 ceph-admin
### All nodes must run kernel 4.5 or later
uname -r
5.2.2-1.el7.elrepo.x86_64
[root@ceph-admin ~]# ceph -s
  cluster:
    id:     8a83b874-efa4-4655-b070-704e63553839
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 34s)
    mgr: ceph04(active, since 18s), standbys: ceph03
    osd: 8 osds: 8 up (since 20s), 8 in (since 23h)
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   8.0 GiB used, 64 GiB / 72 GiB avail
    pgs:     

Create a pool

PG calculation

Total PGs = (Total_number_of_OSD * 100) / max_replication_count
PGs per pool = Total PGs / pool_count
8 * 100 / 3 / 4 = 66.6667

Round the result to the nearest power of 2. For example, with 8 OSDs in total, a replication factor of 3, and 4 pools, the formula above gives 66.6667; the closest power of 2 is 64, so each pool is assigned 64 PGs.
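The same arithmetic can be scripted. Below is a minimal bash sketch (the OSD, replica, and pool counts are hard-coded to match this cluster) that computes the raw per-pool value and picks the nearest power of 2:

#!/bin/bash
# Assumed values for this cluster: 8 OSDs, replication factor 3, 4 pools
osds=8 replicas=3 pools=4
# Raw per-pool PG count (integer division; 66 for these values)
raw=$(( osds * 100 / replicas / pools ))
# Find the powers of 2 just below and just above the raw value,
# then keep whichever one is closer
lo=1
while (( lo * 2 <= raw )); do (( lo *= 2 )); done
hi=$(( lo * 2 ))
if (( raw - lo <= hi - raw )); then pg=$lo; else pg=$hi; fi
echo "pg_num per pool: $pg"    # prints 64 here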

[root@ceph-admin ~]# ceph osd pool create pool1 64 64
pool 'pool1' created
[root@ceph-admin ~]# ceph osd pool  ls
pool1
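The PG formula above assumed a replication factor of 3, which is also the Ceph default (osd_pool_default_size). It can be checked, and changed if needed, per pool; a quick sketch:

# Show how many replicas pool1 keeps (3 by default)
ceph osd pool get pool1 size
# Show the minimum number of replicas required to keep serving I/O
ceph osd pool get pool1 min_size
# A different replica count could be set like this (shown only as an example):
# ceph osd pool set pool1 size 3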

Check the pool's existing PG and PGP counts

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool get pool1 pg_num
pg_num: 64
[cephadm@ceph-admin ceph-cluster]$ ceph osd pool get pool1 pgp_num
pgp_num: 64
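If more OSDs are added later, the PG count of an existing pool can be raised; pgp_num should be kept equal to pg_num so that placement is actually recalculated. A minimal sketch, assuming 128 PGs per pool were the new target:

# Assumption: the cluster has grown and 128 PGs per pool is now appropriate
ceph osd pool set pool1 pg_num 128
ceph osd pool set pool1 pgp_num 128
# Watch the cluster rebalance
ceph -s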

Upload a file via the rados API

[cephadm@ceph-admin ceph-cluster]$ echo "this a test" > test.txt
[cephadm@ceph-admin ceph-cluster]$ pwd
/home/cephadm/ceph-cluster
[cephadm@ceph-admin ceph-cluster]$ rados put test.txt  /home/cephadm/ceph-cluster/test.txt --pool=pool1
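To verify the upload, the object can be read back with rados get and compared against the source file. A small sketch (the output path /tmp/test.out is just an example):

# Read the object back out of pool1 into a local file
rados get test.txt /tmp/test.out --pool=pool1
# The two files should be identical
diff /home/cephadm/ceph-cluster/test.txt /tmp/test.out && echo "match"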

List the objects in the pool

[cephadm@ceph-admin ceph-cluster]$ rados ls --pool=pool1
test.txt

Check which OSDs the object is located on

[cephadm@ceph-admin ceph-cluster]$ ceph osd map pool1 test.txt
osdmap e43 pool 'pool1' (1) object 'test.txt' -> pg 1.8b0b6108 (1.8) -> up ([5,4,6], p5) acting ([5,4,6], p5)
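The output shows that the object hashes to PG 1.8, whose up and acting sets are OSDs 5, 4 and 6, with osd.5 as the primary (p5). The same placement can also be queried by PG id:

# Look up the OSD mapping for PG 1.8 directly
ceph pg map 1.8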

Delete the object

[cephadm@ceph-admin ceph-cluster]$ rados rm test.txt -p pool1

Create an RBD image

First, create a pool

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool create pool2 64 64
pool 'pool2' created

Enable the rbd application on the pool

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool application enable pool2 rbd
enabled application 'rbd' on pool 'pool2'

Initialize the RBD pool

[cephadm@ceph-admin ceph-cluster]$ rbd pool  init -p pool2

Create an image

[cephadm@ceph-admin ceph-cluster]$ rbd create --size 2G pool2/img1
[cephadm@ceph-admin ceph-cluster]$ rbd ls -p pool2
img1
[cephadm@ceph-admin ceph-cluster]$ rbd info pool2/img1
rbd image 'img1':
	size 2 GiB in 512 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: acb9b87fa25e
	block_name_prefix: rbd_data.acb9b87fa25e
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features: 
	flags: 
	create_timestamp: Sat Jul 13 14:55:07 2019
	access_timestamp: Sat Jul 13 14:55:07 2019
	modify_timestamp: Sat Jul 13 14:55:07 2019
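The image can now be attached on a client through the kernel RBD driver. A minimal sketch, assuming the client has /etc/ceph/ceph.conf and the admin keyring in place; if rbd map reports unsupported features on an older kernel, they can be disabled first (e.g. rbd feature disable pool2/img1 object-map fast-diff deep-flatten):

# Map the image; the kernel driver assigns a device such as /dev/rbd0
sudo rbd map pool2/img1
# Put a filesystem on it and mount it (the device name below is an assumption)
sudo mkfs.xfs /dev/rbd0
sudo mkdir -p /mnt/img1
sudo mount /dev/rbd0 /mnt/img1
# When finished, unmount and unmap
sudo umount /mnt/img1
sudo rbd unmap pool2/img1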

Create a RADOS Gateway (radosgw)

Create the radosgw daemon on ceph01

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy  rgw create ceph01
[cephadm@ceph-admin ceph-cluster]$ ceph -s
  cluster:
    id:     8a83b874-efa4-4655-b070-704e63553839
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 60m)
    mgr: ceph04(active, since 60m), standbys: ceph03
    osd: 8 osds: 8 up (since 60m), 8 in (since 40h)
    rgw: 1 daemon active (ceph01)
 
  data:
    pools:   6 pools, 160 pgs
    objects: 192 objects, 1.4 KiB
    usage:   8.1 GiB used, 64 GiB / 72 GiB avail
    pgs:     160 active+clean

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool ls
pool1
pool2
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log

Access ceph01:7480 (the default radosgw port) to verify that the gateway responds.
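To actually use the gateway over S3, a RadosGW user is needed; radosgw-admin prints the access and secret keys when the user is created. A minimal sketch (the uid and display name are made up for illustration):

# Create an S3-capable user from any node with admin credentials
radosgw-admin user create --uid=testuser --display-name="Test User"
# The JSON output contains access_key and secret_key for S3 clients;
# the same information can be shown again later with:
radosgw-admin user info --uid=testuser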

Create CephFS

Create an MDS daemon on ceph02

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mds create ceph02
[cephadm@ceph-admin ceph-cluster]$ ceph mds stat
 1 up:standby

Create a pool

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool create pool3  64 64 
pool 'pool3' created
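CephFS needs a metadata pool in addition to the data pool (pool3 here); once the filesystem is created, the MDS on ceph02 leaves standby and becomes active. A minimal sketch of the remaining steps (the metadata pool name, filesystem name, mount point and key placeholder are assumptions, not taken from the original cluster):

# Create a metadata pool to pair with the data pool
ceph osd pool create pool3-metadata 64 64
# Create the filesystem: ceph fs new <fs_name> <metadata_pool> <data_pool>
ceph fs new cephfs1 pool3-metadata pool3
# The MDS should now report as active
ceph mds stat
# Mount with the kernel client (replace <admin-key> with the real admin secret)
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph ceph01:6789:/ /mnt/cephfs -o name=admin,secret=<admin-key>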
