Ceph Basic Operation Commands

Notes:

  1. <pool-name> denotes the storage pool name; <image-name> denotes the image name; <snap-name> denotes the snapshot name
  2. { } denotes an optional argument

1. Basic pool operation commands

1.1 Create a pool
# Syntax
ceph osd pool create <poolname> <int[0-]> {<int[0-]>}
# Example
ceph osd pool create rbd_pool 4 4
1.2 List pools
# Syntax
ceph osd pool ls {detail}
# Example (append detail for more information)
ceph osd pool ls
1.3 Delete a pool
# Syntax
ceph osd pool rm <pool-name>   or   ceph osd pool delete <pool-name>
# Example
ceph osd pool rm rbd_pool
# Note: before a pool can be deleted, the monitors must be configured to allow pool deletion (one way to do this is shown after this block)
# (1) mon_allow_pool_delete = true
# (2) the actual deletion is then run as: ceph osd pool rm <pool-name> <pool-name> {--yes-i-really-really-mean-it}
# e.g.: ceph osd pool rm rbd_pool rbd_pool --yes-i-really-really-mean-it
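One way to satisfy that requirement, sketched here under the assumption that the default (deletion forbidden) is still in effect:
# (a) persistently: add the following to the [mon] section of ceph.conf and restart the monitors
#     mon_allow_pool_delete = true
# (b) or temporarily at runtime, without a restart
ceph tell mon.\* injectargs '--mon_allow_pool_delete=true'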
1.4 Set the pool application type
# Note:
# A pool created in Ceph can be tagged for use by cephfs (file system), rbd (block device), or rgw (object gateway). Leaving the tag unset does not prevent use; the cluster health check simply reports a corresponding warning.

# Syntax (to remove the tag, replace enable with disable)
ceph osd pool application enable <poolname> <app> {--yes-i-really-mean-it}
# Example
ceph osd pool application enable rbd_pool rbd --yes-i-really-mean-it
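To check how a pool has been tagged afterwards, the application get subcommand listed in section 1.5 can be used; a brief example with the pool from above:
ceph osd pool application get rbd_pool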
1.5 More pool operations
# For more operations, see ceph osd pool -h
# disables use of an application <app> on pool <poolname>
osd pool application disable <poolname> <app> {--yes-i-really-mean-it}                 
# enable use of an application <app> [cephfs,rbd,rgw] on pool <poolname>
osd pool application enable <poolname> <app> {--yes-i-really-mean-it}                   
# get value of key <key> of application <app> on pool <poolname>  
osd pool application get {<poolname>} {<app>} {<key>}                                   
# removes application <app> metadata key <key> on pool <poolname>  
osd pool application rm <poolname> <app> <key>                                         
# sets application <app> metadata key <key> to <value> on pool <poolname>   
osd pool application set <poolname> <app> <key> <value>                                 
# create pool  
osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure} {<erasure_code_profile>} {<rule>} {<int>}
# list pools
osd pool ls {detail}                                                                   
# make snapshot <snap> in <pool>  
osd pool mksnap <poolname> <snap>                                                       
# rename <srcpool> to <destpool>  
osd pool rename <poolname> <poolname>                                                   
# remove pool  
osd pool rm <poolname> {<poolname>} {<sure>}                                           
# remove snapshot <snap> from <pool>   
osd pool rmsnap <poolname> <snap>                                                       
# get pool parameter <var>
osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|auid|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote|all|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites
# set pool parameter <var> to <val> 
osd pool set <poolname> size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites <val> {--yes-i-really-mean-it}

2. Basic CephFS file system operations

2.1 Deploy the MDS service
# After the monitors (mon) and OSDs are deployed, at least one metadata server daemon must be deployed before CephFS can be used
ceph-deploy mds create {host-name}[:{daemon-name}] [{host-name}[:{daemon-name}] ...]

ceph-deploy mds create ceph01 ceph02 ceph03
2.2 Create a Ceph file system

A Ceph file system requires at least two RADOS pools, one for data and one for metadata. When configuring them, consider:

  • Using a higher replication level for the metadata pool, since any data loss in this pool can make the whole file system inaccessible.
  • Using lower-latency storage (such as SSDs) for the metadata pool, since this directly affects the observed latency of file system operations on clients (a sketch of both points follows this list).
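A minimal sketch of both points; the replica count of 3 and the CRUSH rule name replicated-ssd (targeting the ssd device class) are example values, not requirements:
# raise the replication level of the metadata pool
ceph osd pool set cephfs_metadata size 3
# create an SSD-only replicated rule and assign it to the metadata pool
ceph osd crush rule create-replicated replicated-ssd default host ssd
ceph osd pool set cephfs_metadata crush_rule replicated-ssd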
2.2.1 Create the pools
# A CephFS file system needs two pools, one for data and one for metadata
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
2.2.2 Create the file system
# Create the file system with the fs new command
ceph fs new <fs_name> <metadata> <data>
ceph fs new cephfs cephfs_metadata cephfs_data

# List existing file systems
[root@node197 ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
2.2.3 Check the MDS server status
# Check the MDS server status (once the file system is created, the MDS server can reach the active state)
[root@node210 ~]# ceph mds stat
cephfs-1/1/1 up  {0=node210=up:active(laggy or crashed)}
2.3 Check the file system status
[root@node210 ~]# ceph fs status
cephfs - 2 clients
======
+------+--------+---------+---------------+-------+-------+
| Rank | State  |   MDS   |    Activity   |  dns  |  inos |
+------+--------+---------+---------------+-------+-------+
|  0   | active | node210 | Reqs:    0 /s |   28  |   25  |
+------+--------+---------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 4983k | 59.5G |
|   cephfs_data   |   data   | 81.7M | 59.5G |
+-----------------+----------+-------+-------+
cephfs2 - 0 clients
===
+------+--------+---------+---------------+-------+-------+
| Rank | State  |   MDS   |    Activity   |  dns  |  inos |
+------+--------+---------+---------------+-------+-------+
|  0   | active | node212 | Reqs:    0 /s |   10  |   12  |
+------+--------+---------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 2246  | 59.5G |
|   cephfs_data   |   data   |    0  | 59.5G |
+-----------------+----------+-------+-------+

+-------------+
| Standby MDS |
+-------------+
|   node210   |
+-------------+
MDS version: ceph version 12.2.13 (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)
2.4 Mount the file system
  1. Create a mount directory on the client

    mkdir /mnt/cephfs
    
  2. All of the mounts below are performed on the client, using the kernel driver to mount the file system

2.4.1 Plain mount
# If cephx authentication was not enabled in the configuration file when the cluster was deployed, the file system can be mounted directly without a key or secret file
mount -t ceph ip:port:/ /mnt/cephfs
mount -t ceph 192.168.20.0:6789:/ /mnt/cephfs
2.4.2 Mount with a secretfile
# If cephx authentication was enabled in the configuration file when the cluster was deployed, a key or secret file is required to mount
# Obtain the key from the cluster and store it on the client (a sketch of pulling it with ceph auth follows this block)
mkdir /etc/ceph
echo 'AQBjVUZeJMQKBRAAh1Lp7p7A0YdBnw7VeZPJtQ==' > /etc/ceph/cephfskey 
# Mount
mount -t ceph ip:port:/ /mnt/cephfs -o name=xxx,secretfile=/etc/ceph/cephfskey
mount -t ceph 192.168.20.0:6789:/ /mnt/cephfs -o name=xxx,secretfile=/etc/ceph/cephfskey
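If the key is not known in advance, it can be pulled from any node with admin credentials; a sketch assuming the client.admin user (use whichever user matches the name= option above):
ceph auth get-key client.admin > /etc/ceph/cephfskey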
2.4.3 Mount with a secret
# If cephx authentication was enabled in the configuration file when the cluster was deployed, a key or secret file is required to mount
# Obtain the key from the cluster
# Mount
mount -t ceph ip:port:/ /mnt/cephfs -o name=xxx,secret=AQBjVUZeJMQKBRAAh1Lp7p7A0YdBnw7VeZPJtQ==
mount -t ceph 192.168.20.0:6789:/ /mnt/cephfs -o name=xxx,secret=AQBjVUZeJMQKBRAAh1Lp7p7A0YdBnw7VeZPJtQ==
2.4.4 Mount with multiple mon nodes
# To mount against multiple mon nodes, separate them with commas
mount -t ceph ip:port,ip:port:/ /mnt/cephfs -o name=xxx,secret=AQBjVUZeJMQKBRAAh1Lp7p7A0YdBnw7VeZPJtQ==
mount -t ceph 192.168.20.0:6789,192.168.20.1:6789:/ /mnt/cephfs -o name=xxx,secret=AQBjVUZeJMQKBRAAh1Lp7p7A0YdBnw7VeZPJtQ==
2.4.5 Mount a specific file system
# Select the file system with the fs or mds_namespace option
mount -t ceph ip:port,ip:port:/ /mnt/cephfs -o name=xxx,secret=AQBjVUZeJMQKBRAAh1Lp7p7A0YdBnw7VeZPJtQ==,mds_namespace=xxx
mount -t ceph 192.168.20.0:6789,192.168.20.1:6789:/ /mnt/cephfs -o name=xxx,secret=AQBjVUZeJMQKBRAAh1Lp7p7A0YdBnw7VeZPJtQ==,mds_namespace=xxx
2.5 Check the file system mount
# Check the mounted file system
[root@test-node210 ~]# df -h /mnt/cephfs
Filesystem             Size  Used Avail Use% Mounted on
192.168.20.0:6789:/  153G     0  153G   0% /mnt/cephfs
2.6 Unmount the file system
# Unmount the file system
umount /mnt/cephfs

3. Basic image operation commands

Note:
If an image operation does not specify a pool, it uses the default rbd pool, but no rbd pool exists out of the box; you can therefore create a pool named rbd, or use rbd pool init to initialize a pool for RBD use (in practice this did not appear to make it the default).
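A minimal sketch of preparing such a pool (the PG count of 64 is only an example value):
ceph osd pool create rbd 64
rbd pool init rbd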

3.1 Create a block device pool and image
# Create the pool
ceph osd pool create rbd_pool 100
# Create the image
# Syntax (size is in MiB here; the exact unit may vary with the Ceph version)
rbd create --size <size> <pool-name>/<image-name>
# Example
rbd create --size 102400 rbd_pool/foo
3.2 List images
# Syntax
# --format selects the output format (plain, json, xml); with json or xml, adding --pretty-format improves readability
rbd ls <pool-name> {--format json} {--format json --pretty-format}
# Example
rbd ls rbd_pool
3.3 Rename an image
# Syntax
rbd rename <pool-name>/<image-name> <pool-name>/<new-image-name>
# Example
rbd rename rbd_pool/foo rbd_pool/new-foo
3.4 Resize an image
# Syntax (if no pool name is given, the same note as for image creation applies)
rbd resize --size 2048 <pool-name>/<image-name> (to increase)
rbd resize --size 2048 <pool-name>/<image-name> --allow-shrink (to decrease)
# Example
rbd resize --size 20480 rbd_pool/foo
rbd resize --size 2048 rbd_pool/foo --allow-shrink
3.5 Show image information
# Syntax
# --format selects the output format (plain, json, xml); with json or xml, adding --pretty-format improves readability
rbd info <pool-name>/<image-name> {--format json --pretty-format}
# Example
rbd info rbd_pool/foo
# Sample output
[root@node210 ~]# rbd info rbd_pool/foo
rbd image 'foo':
        size 4.4 GiB in 1124 objects				
        order 22 (4 MiB objects)
        id: 4188b2ae8944a
        block_name_prefix: rbd_data.4188b2ae8944a
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Sun Dec  5 22:19:47 2021
# Field notes:
# size: the image size and the number of objects it is split into.
# order 22: the object size order; the valid range is 12 to 25 (4 KiB to 32 MiB), and 22 means 2^22 bytes, i.e. 4 MiB objects.
# id: the image's ID.
# block_name_prefix: the name prefix of the objects backing the image.
# format: the image format in use, 2 by default.
# features: the features currently enabled on the image.
# op_features: optional operation features.

# Feature notes (kernel 3.10 supports only layering; the other features require a newer kernel — see the sketch after this list)
# layering: layering support
# striping: striping v2 support
# exclusive-lock: exclusive lock support
# object-map: object map support (depends on exclusive-lock)
# fast-diff: fast diff calculation (depends on object-map)
# deep-flatten: snapshot flatten support
# journaling: journaling of I/O operations (depends on exclusive-lock)
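If an image has to be mapped with such an old kernel client, a common workaround is to drop the unsupported features or to create the image with layering only; a hedged sketch using the example image from above:
# disable features the old kernel client cannot handle on an existing image
rbd feature disable rbd_pool/foo exclusive-lock object-map fast-diff deep-flatten
# or create the image with only the layering feature in the first place
rbd create --size 102400 --image-feature layering rbd_pool/foo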
3.6 Delete an image
  1. An image that has snapshots cannot be deleted (delete its snapshots first; see the snapshot operations)
  2. An image that is in use cannot be deleted either (see "Check image status")
# Syntax (remove and rm are interchangeable; deleting an image directly is usually not recommended, since an image deleted this way cannot be recovered)
rbd rm <pool-name>/<image-name>   or   rbd remove <pool-name>/<image-name>
# Example
rbd rm rbd_pool/foo

# The trash command is recommended instead: it moves the image to a recycle bin, from which it can be restored if needed
rbd trash move <pool-name>/<image-name>
# List the trash entries of a pool
rbd trash ls <pool-name>
# Restore
rbd trash restore <pool-name>/<id>
# Example
[root@node210 ~]# rbd trash mv rbd_pool/foo
[root@node210 ~]# rbd trash ls rbd_pool
7240e6b8b4567 foo
[root@node210 ~]# rbd trash restore rbd_pool/7240e6b8b4567
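To delete an image from the trash permanently (after which it can no longer be restored), a hedged example using the ID listed above:
rbd trash rm rbd_pool/7240e6b8b4567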
3.7 Check image status
# Syntax
# --format selects the output format (plain, json, xml); with json or xml, adding --pretty-format improves readability
rbd status <pool-name>/<image-name> {--format json --pretty-format}
# Example
[root@node210 ~]# rbd status rbd_pool/24b6e41d71d14669b7d25b0ec099badc
Watchers:
        watcher=192.168.20.99:0/219949398 client.373421 cookie=94168476317824
# This shows that the image is in use on 192.168.20.99 (e.g. a virtual machine was created that uses it as a data disk or CD-ROM)

# How to remove the stale reference
# Add the leftover watcher to the OSD blacklist
[root@node210 ~]# ceph osd blacklist add 192.168.20.99:0/219949398
blacklisting 192.168.20.99:0/219949398 until 2021-08-13T15:37:09.447761+0800 (3600 sec)
# Check whether the watcher is still present
[root@node210 ~]# rbd status rbd_pool/24b6e41d71d14669b7d25b0ec099badc
Watchers: none
# Delete the image
[root@node192 ~]# rbd rm rbd_pool/24b6e41d71d14669b7d25b0ec099badc
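Once the stale client is gone, the blacklist entry can be removed again; it also expires by itself after the timeout shown above (a brief example):
ceph osd blacklist rm 192.168.20.99:0/219949398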
3.8 Copy an image
# Shallow copy (a shallow copy does not copy the image's snapshots)
# Syntax (the copy can be placed in another pool)
rbd copy <pool-name>/<image-name> <pool-name>/<new-image-name>
# Example
rbd copy rbd_pool/foo rbd_pool/new-foo
# Check the snapshots after a shallow copy
[root@node210 ~]# rbd snap ls rbd_pool/new-foo

# Deep copy (a deep copy also copies the image's snapshots)
# Syntax (the copy can be placed in another pool)
rbd deep copy <pool-name>/<image-name> <pool-name>/<new-image-name>
# Example
rbd deep copy rbd_pool/foo rbd_pool/new-foo
# Check the snapshots after a deep copy
[root@node210 ~]# rbd snap ls rbd_pool/new-foo
SNAPID NAME   SIZE TIMESTAMP
     1 a1   10 GiB Tue Dec  7 20:49:10 2021
     2 a2   10 GiB Tue Dec  7 20:49:11 2021
3.9 Check an image's actual usage
# Query with rbd du (available since the Jewel release)
# Syntax
# --format selects the output format (plain, json, xml); with json or xml, adding --pretty-format improves readability
rbd disk-usage <pool-name>/<image-name> {--format json --pretty-format}
# Example
[root@node210 ~]# rbd disk-usage rbd_pool/ww
NAME PROVISIONED    USED
ww       4.4 GiB 4.4 GiB

# Query with rbd diff
# Syntax (the output is in bytes and not summed; pipe it to awk to produce the format you want)
rbd diff <pool-name>/<image-name>
# Example (bytes)
[root@node210 ~]# rbd diff rbd_pool/ww | awk '{ SUM += $2 } END { print SUM " bytes" }'
4712300544 bytes
# Example (KB)
[root@node210 ~]# rbd diff rbd_pool/ww | awk '{ SUM += $2 } END { print SUM/1024 " KB" }'
4601856 KB
# Example (MB)
[root@node210 ~]# rbd diff rbd_pool/ww | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
4494 MB
# Example (GB)
[root@node210 ~]# rbd diff rbd_pool/ww | awk '{ SUM += $2 } END { print SUM/1024/1024/1024 " GB" }'
4.38867 GB
3.10 Import/export images
# Import an image from a file
# Syntax
rbd import <filepath> <pool-name>/<image-name>
# Example (import an image file in the standard format; note: the default format is already 2, so it can be omitted)
rbd import --image-format 2 /tmp/img rbd_pool/ww

# Export an image to a file
rbd export <pool-name>/<image-name> <filepath>
# Example
rbd export rbd_pool/ww /tmp/img

4. Basic snapshot operation commands

Note: if a Ceph block snapshot operation does not specify a pool, it uses the default rbd pool, but no rbd pool exists out of the box; you can therefore create a pool named rbd, or use rbd pool init to initialize a pool for RBD use (in practice this did not appear to make it the default).

4.1 Create a block device pool and image
# Create the pool
ceph osd pool create rbd_pool 100
# Create the image
rbd create --size 102400 rbd_pool/foo
4.2 Create a snapshot
# Syntax
rbd snap create <pool-name>/<image-name>@<snap-name>
# Example
rbd snap create rbd_pool/foo@snapname
4.3 List snapshots
# Syntax
# --format selects the output format (plain, json, xml); with json or xml, adding --pretty-format improves readability
rbd snap ls <pool-name>/<image-name> {--format json --pretty-format}
# Example
[root@node210 ~]# rbd snap ls rbd_pool/foo
SNAPID NAME   SIZE TIMESTAMP
     1 a1   10 GiB Tue Dec  7 06:49:29 2021
     2 a3   10 GiB Tue Dec  7 20:38:42 2021
4.4 Roll back to a snapshot
# Syntax
rbd snap rollback <pool-name>/<image-name>@<snap-name>
# Example
rbd snap rollback rbd_pool/foo@snapname
4.5 Delete a snapshot
  1. A protected snapshot cannot be deleted (unprotect it first)
  2. If a snapshot is referenced (i.e. it has been cloned), the child images cloned from it must be flattened before the snapshot can be deleted
# Syntax
rbd snap rm <pool-name>/<image-name>@<snap-name>
# Example
rbd snap rm rbd_pool/foo@snapname
# Note: Ceph OSDs delete data asynchronously, so deleting a snapshot does not free disk space immediately.
4.6 Purge snapshots

Purging here means deleting all snapshots of an image (protection must be removed first)

# Syntax
rbd snap purge <pool-name>/<image-name>
# Example
rbd snap purge rbd_pool/foo
4.7 Protect a snapshot
  1. Clones access their parent snapshot. If a user accidentally deletes the parent snapshot, all of its clones break. To prevent data loss, a snapshot must be protected before it is cloned.
  2. A protected snapshot cannot be deleted
# Syntax
rbd snap protect <pool-name>/<image-name>@<snap-name>
# Example
rbd snap protect rbd_pool/foo@snapname
4.8 Clone a snapshot

The snapshot must be protected before it is cloned

# Syntax (the clone can be placed in another pool)
rbd clone <pool-name>/<parent-image>@<snap-name> <pool-name>/<child-image-name>
# Example
rbd clone rbd_pool/foo@snapname rbd_pool/new_foo
# Note: a snapshot of an image in one pool can be cloned into a different pool. For example, read-only images and their snapshots can be kept in one pool as templates while writable clones are placed in another pool.
4.9 Unprotect a snapshot
  1. A snapshot must be unprotected before it can be deleted

  2. A snapshot referenced by clones cannot be deleted, so before deleting it you must first flatten each of its clones

# Syntax
rbd snap unprotect <pool-name>/<image-name>@<snap-name>
# Example
rbd snap unprotect rbd_pool/foo@snapname
4.10 List a snapshot's children

The snapshot must be protected before it is cloned

# Syntax
rbd children <pool-name>/<image-name>@<snap-name>
# Example
[root@node210 ~]# rbd children rbd_pool/foo@a
rbd_pool/a
4.11 Flatten a cloned image
  1. A cloned image keeps a reference to its parent snapshot. To remove this reference from the child clone, the snapshot's data is copied into the clone, i.e. the clone is "flattened". Flattening takes longer the larger the snapshot is. Before a snapshot can be deleted, its child images must be flattened
# Syntax
rbd flatten <pool-name>/<image-name>
# Example
rbd flatten rbd_pool/foo

5. Basic mon node operation commands

5.1 Show mon node information
# Show mon node information
[root@node210 ~]# ceph mon dump
dumped monmap epoch 1
epoch 1
fsid 7c9c2ba8-dcec-42b0-8231-a2149988b913
last_changed 2021-11-30 21:07:24.998343
created 2021-11-30 21:07:24.998343
0: 192.168.20.210:8000/0 mon.node210
1: 192.168.20.211:8000/0 mon.node211
2: 192.168.20.212:8000/0 mon.node212

# Show mon node information (JSON output)
[root@node210 ~]# ceph mon dump -f json
{
	"epoch": 1,
	"fsid": "7c9c2ba8-dcec-42b0-8231-a2149988b913",
	"modified": "2021-11-30 21:07:24.998343",
	"created": "2021-11-30 21:07:24.998343",
	"features": {
		"persistent": ["kraken", "luminous", "mimic", "osdmap-prune"],
		"optional": []
	},
	"mons": [{
		"rank": 0,
		"name": "node210",
		"addr": "192.168.20.210:8000/0",
		"public_addr": "192.168.20.210:8000/0"
	}, {
		"rank": 1,
		"name": "node211",
		"addr": "192.168.20.211:8000/0",
		"public_addr": "192.168.20.211:8000/0"
	}, {
		"rank": 2,
		"name": "node212",
		"addr": "192.168.20.212:8000/0",
		"public_addr": "192.168.20.212:8000/0"
	}],
	"quorum": [0, 1, 2]
}

5.2 Check mon node status
5.2.1 Brief view
[root@node210 ~]# ceph mon stat
e1: 3 mons at {node210=192.168.20.210:8000/0,node211=192.168.20.211:8000/0,node212=192.168.20.212:8000/0}, election epoch 56, leader 0 node210, quorum 0,1 node210,node211, out of quorum 2 node212
# The output above shows a cluster with three mon nodes, listing each node's name, IP and port; the remaining fields mean:
# leader 0 node210: the cluster's leader mon is the node with rank 0, named node210
# quorum 0,1 node210,node211: the nodes with ranks 0 and 1 are currently in the quorum
# out of quorum 2 node212: the node with rank 2 is not in the quorum (i.e. it is disconnected or its mon service is not running)
5.2.2 Detailed view
# Check mon node status
[root@node210 ~]# ceph mon_status
{
	"name": "node210",                   # 节点名称
	"rank": 0,							 # 节点rank值,类似uuid
	"state": "leader",					 # 节点状态:主节点为leader,其余为open
	"election_epoch": 56,
	"quorum": [0, 1, 2],
	"features": {
		"required_con": "144115738102218752",
		"required_mon": ["kraken", "luminous", "mimic", "osdmap-prune"],
		"quorum_con": "4611087854031667195",
		"quorum_mon": ["kraken", "luminous", "mimic", "osdmap-prune"]
	},
	"outside_quorum": [],
	"extra_probe_peers": [],
	"sync_provider": [],
	"monmap": {
		"epoch": 1,
		"fsid": "7c9c2ba8-dcec-42b0-8231-a2149988b913",
		"modified": "2021-11-30 21:07:24.998343",
		"created": "2021-11-30 21:07:24.998343",
		"features": {
			"persistent": ["kraken", "luminous", "mimic", "osdmap-prune"],
			"optional": []
		},
		"mons": [{
			"rank": 0,
			"name": "node210",
			"addr": "192.168.20.210:8000/0",
			"public_addr": "192.168.20.210:8000/0"
		}, {
			"rank": 1,
			"name": "node211",
			"addr": "192.168.20.211:8000/0",
			"public_addr": "192.168.20.211:8000/0"
		}, {
			"rank": 2,
			"name": "node212",
			"addr": "192.168.20.212:8000/0",
			"public_addr": "192.168.20.212:8000/0"
		}]
	},
	"feature_map": {
		"mon": [{
			"features": "0x3ffddff8ffacfffb",
			"release": "luminous",
			"num": 1
		}],
		"mds": [{
			"features": "0x3ffddff8ffacfffb",
			"release": "luminous",
			"num": 1
		}],
		"client": [{
			"features": "0x7fddff8ef8cbffb",
			"release": "jewel",
			"num": 13
		}, {
			"features": "0x3ffddff8ffacfffb",
			"release": "luminous",
			"num": 1
		}],
		"mgr": [{
			"features": "0x3ffddff8ffacfffb",
			"release": "luminous",
			"num": 1
		}]
	}
}

6. Basic OSD node operation commands

6.1 Show OSD information
# Show OSD information (JSON output)
[root@node210 ~]# ceph osd dump -f json
{
	"epoch": 69,
	"fsid": "7c9c2ba8-dcec-42b0-8231-a2149988b913",
	"created": "2021-11-30 21:19:44.366948",
	"modified": "2021-12-07 20:50:38.927353",
	"flags": "sortbitwise,recovery_deletes,purged_snapdirs",
	"flags_num": 1605632,
	"flags_set": ["purged_snapdirs", "recovery_deletes", "sortbitwise"],
	"crush_version": 7,
	"full_ratio": 0.950000,
	"backfillfull_ratio": 0.900000,
	"nearfull_ratio": 0.850000,
	"cluster_snapshot": "",
	"pool_max": 6,
	"max_osd": 3,
	"require_min_compat_client": "jewel",
	"min_compat_client": "jewel",
	"require_osd_release": "mimic",
	"pools": [{
		"pool": 1,
		"pool_name": "cephfs_data",
		"create_time": "2021-11-30 21:29:38.362686",
		"flags": 1,
		"flags_names": "hashpspool",
		"type": 1,
		"size": 3,
		"min_size": 2,
		"crush_rule": 0,
		"object_hash": 2,
		"pg_num": 16,
		"pg_placement_num": 16,
		"last_change": "20",
		"last_force_op_resend": "0",
		"last_force_op_resend_preluminous": "0",
		"auid": 0,
		"snap_mode": "selfmanaged",
		"snap_seq": 0,
		"snap_epoch": 0,
		"pool_snaps": [],
		"removed_snaps": "[]",
		"quota_max_bytes": 0,
		"quota_max_objects": 0,
		"tiers": [],
		"tier_of": -1,
		"read_tier": -1,
		"write_tier": -1,
		"cache_mode": "none",
		"target_max_bytes": 0,
		"target_max_objects": 0,
		"cache_target_dirty_ratio_micro": 400000,
		"cache_target_dirty_high_ratio_micro": 600000,
		"cache_target_full_ratio_micro": 800000,
		"cache_min_flush_age": 0,
		"cache_min_evict_age": 0,
		"erasure_code_profile": "",
		"hit_set_params": {"type": "none"},
		"hit_set_period": 0,
		"hit_set_count": 0,
		"use_gmt_hitset": true,
		"min_read_recency_for_promote": 0,
		"min_write_recency_for_promote": 0,
		"hit_set_grade_decay_rate": 0,
		"hit_set_search_last_n": 0,
		"grade_table": [],
		"stripe_width": 0,
		"expected_num_objects": 0,
		"fast_read": false,
		"options": {},
		"application_metadata": {"cephfs": {"data": "cephfs"}}}],
	"osds": [{
		"osd": 0,
		"uuid": "ff911e62-f88e-4fa4-bb86-c8b8b9eb8ade",
		"up": 1,
		"in": 1,
		"weight": 1.000000,
		"primary_affinity": 1.000000,
		"last_clean_begin": 5,
		"last_clean_end": 32,
		"up_from": 35,
		"up_thru": 35,
		"down_at": 34,
		"lost_at": 0,
		"public_addr": "192.168.20.210:6800/1290",
		"cluster_addr": "192.168.20.210:6801/1290",
		"heartbeat_back_addr": "192.168.20.210:6802/1290",
		"heartbeat_front_addr": "192.168.20.210:6803/1290",
		"state": ["exists", "up"]
	}, {
		"osd": 1,
		"uuid": "e59f43a9-34f0-4580-8a2e-0a65c6251b7f",
		"up": 1,
		"in": 1,
		"weight": 1.000000,
		"primary_affinity": 1.000000,
		"last_clean_begin": 9,
		"last_clean_end": 32,
		"up_from": 35,
		"up_thru": 35,
		"down_at": 34,
		"lost_at": 0,
		"public_addr": "192.168.20.211:6800/1245",
		"cluster_addr": "192.168.20.211:6801/1245",
		"heartbeat_back_addr": "192.168.20.211:6802/1245",
		"heartbeat_front_addr": "192.168.20.211:6803/1245",
		"state": ["exists", "up"]
	}, {
		"osd": 2,
		"uuid": "7da11eaa-26f2-4898-9160-75301595e111",
		"up": 1,
		"in": 1,
		"weight": 1.000000,
		"primary_affinity": 1.000000,
		"last_clean_begin": 13,
		"last_clean_end": 32,
		"up_from": 35,
		"up_thru": 35,
		"down_at": 34,
		"lost_at": 0,
		"public_addr": "192.168.20.212:6800/1299",
		"cluster_addr": "192.168.20.212:6801/1299",
		"heartbeat_back_addr": "192.168.20.212:6802/1299",
		"heartbeat_front_addr": "192.168.20.212:6803/1299",
		"state": ["exists", "up"]
	}],
	"osd_xinfo": [{
		"osd": 0,
		"down_stamp": "2021-12-01 22:41:10.601553",
		"laggy_probability": 0.000000,
		"laggy_interval": 0,
		"features": 4611087854031667195,
		"old_weight": 0
	}, {
		"osd": 1,
		"down_stamp": "2021-12-01 22:41:10.601553",
		"laggy_probability": 0.000000,
		"laggy_interval": 0,
		"features": 4611087854031667195,
		"old_weight": 0
	}, {
		"osd": 2,
		"down_stamp": "2021-12-01 22:41:10.601553",
		"laggy_probability": 0.000000,
		"laggy_interval": 0,
		"features": 4611087854031667195,
		"old_weight": 0
	}],
	"pg_upmap": [],
	"pg_upmap_items": [],
	"pg_temp": [],
	"primary_temp": [],
	"blacklist": {},
	"erasure_code_profiles": {
		"default": {
			"k": "2",
			"m": "1",
			"plugin": "jerasure",
			"technique": "reed_sol_van"
		}
	},
	"removed_snaps_queue": [],
	"new_removed_snaps": [],
	"new_purged_snaps": [{
		"pool": 4,
		"snaps": [{
			"begin": 5,
			"length": 1
		}]
	}]
}
6.2 Check OSD status
6.2.1 Brief view
[root@node210 ~]# ceph osd stat
3 osds: 3 up, 3 in; epoch: e69
6.2.2 Detailed view
# Check OSD status
[root@node210 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF
-1       3.00000 root default
-3       1.00000     host node210
 0   hdd 1.00000         osd.0        up  1.00000 1.00000
-5       1.00000     host node211
 1   hdd 1.00000         osd.1        up  1.00000 1.00000
-7       1.00000     host node212
 2   hdd 1.00000         osd.2        up  1.00000 1.00000

7. Checking space usage

7.1 With rados
7.1.1 Brief view
[root@node210 ~]# rados df detail
POOL_NAME               USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED   RD_OPS      RD WR_OPS     WR
cephfs_data              0 B       0      0      0                  0       0        0        0     0 B      0    0 B
cephfs_metadata      2.2 KiB      22      0     66                  0       0        0       49  51 KiB     46 13 KiB
data-cloud-disk-pool  21 GiB    5338      0  16014                  0       0        0 12239322 300 MiB  20723 21 GiB
mirror-cache-pool      710 B       6      0     18                  0       0        0      383 301 KiB     59 27 KiB
mirror-pool           24 GiB    6168      0  18504                  0       0        0  1144285  37 GiB  43774 27 GiB
root-cloud-disk-pool  20 GiB    5328      0  15984                  0       0        0  2148810  85 GiB 175592 26 GiB

total_objects    16862
total_used       194 GiB
total_avail      2.8 TiB
total_space      3.0 TiB
7.1.2 JSON output
# JSON output
[root@node210 ~]# rados df detail -f json
{
	"pools": [{
		"name": "cephfs_data",			# 存储池名称
		"id": 1,						# 存储池id
		"size_bytes": 0,				# 存储池使用空间大小(bytes类型)
		"size_kb": 0,					# 存储池使用空间大小(kb类型)
		"num_objects": 0,
		"num_object_clones": 0,
		"num_object_copies": 0,
		"num_objects_missing_on_primary": 0,
		"num_objects_unfound": 0,
		"num_objects_degraded": 0,
		"read_ops": 0,
		"read_bytes": 0,
		"write_ops": 0,
		"write_bytes": 0
	}, {
		"name": "cephfs_metadata",
		"id": 2,
		"size_bytes": 2286,
		"size_kb": 3,
		"num_objects": 22,
		"num_object_clones": 0,
		"num_object_copies": 66,
		"num_objects_missing_on_primary": 0,
		"num_objects_unfound": 0,
		"num_objects_degraded": 0,
		"read_ops": 49,
		"read_bytes": 52224,
		"write_ops": 46,
		"write_bytes": 13312
	}, {
		"name": "data-cloud-disk-pool",
		"id": 5,
		"size_bytes": 22045532128,
		"size_kb": 21528840,
		"num_objects": 5338,
		"num_object_clones": 0,
		"num_object_copies": 16014,
		"num_objects_missing_on_primary": 0,
		"num_objects_unfound": 0,
		"num_objects_degraded": 0,
		"read_ops": 12343288,
		"read_bytes": 317519872,
		"write_ops": 20723,
		"write_bytes": 22069889024
	}, {
		"name": "mirror-cache-pool",
		"id": 4,
		"size_bytes": 710,
		"size_kb": 1,
		"num_objects": 6,
		"num_object_clones": 0,
		"num_object_copies": 18,
		"num_objects_missing_on_primary": 0,
		"num_objects_unfound": 0,
		"num_objects_degraded": 0,
		"read_ops": 383,
		"read_bytes": 308224,
		"write_ops": 59,
		"write_bytes": 27648
	}, {
		"name": "mirror-pool",
		"id": 3,
		"size_bytes": 25427801544,
		"size_kb": 24831838,
		"num_objects": 6168,
		"num_object_clones": 0,
		"num_object_copies": 18504,
		"num_objects_missing_on_primary": 0,
		"num_objects_unfound": 0,
		"num_objects_degraded": 0,
		"read_ops": 1154726,
		"read_bytes": 41335623680,
		"write_ops": 43774,
		"write_bytes": 29125074944
	}, {
		"name": "root-cloud-disk-pool",
		"id": 6,
		"size_bytes": 24009071487,
		"size_kb": 23446359,
		"num_objects": 5914,
		"num_object_clones": 0,
		"num_object_copies": 17742,
		"num_objects_missing_on_primary": 0,
		"num_objects_unfound": 0,
		"num_objects_degraded": 0,
		"read_ops": 2206570,
		"read_bytes": 91478306816,
		"write_ops": 207794,
		"write_bytes": 32866043904
	}],
	"total_objects": 17448,
	"total_used": 210351680,
	"total_avail": 3010861504,
	"total_space": 3221213184
}
7.2 With ceph
7.2.1 Brief view
# Brief view
[root@node210 ~]# ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    3.0 TiB     2.8 TiB      194 GiB          6.32
POOLS:
    NAME                     ID     USED        %USED     MAX AVAIL     OBJECTS
    cephfs_data              1          0 B         0       908 GiB           0
    cephfs_metadata          2      2.2 KiB         0       908 GiB          22
    mirror-pool              3       24 GiB      2.54       908 GiB        6168
    mirror-cache-pool        4        710 B         0       908 GiB           6
    data-cloud-disk-pool     5       21 GiB      2.21       908 GiB        5338
    root-cloud-disk-pool     6       20 GiB      2.17       908 GiB        5328
7.2.2 Detailed view
# Detailed output
[root@node210 ~]# ceph df detail
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED     OBJECTS
    3.0 TiB     2.8 TiB      202 GiB          6.59     17.60 k
POOLS:
    NAME                     ID     QUOTA OBJECTS     QUOTA BYTES     USED        %USED     MAX AVAIL     OBJECTS     DIRTY      READ        WRITE       RAW USED
    cephfs_data              1      N/A               N/A                 0 B         0       905 GiB           0         0          0 B         0 B          0 B
    cephfs_metadata          2      N/A               N/A             2.2 KiB         0       905 GiB          22        22         49 B        46 B      6.7 KiB
    mirror-pool              3      N/A               N/A              24 GiB      2.55       905 GiB        6168     6.17 k     1.1 MiB      43 KiB       71 GiB
    mirror-cache-pool        4      N/A               N/A               710 B         0       905 GiB           6         6        383 B        59 B      2.1 KiB
    data-cloud-disk-pool     5      N/A               N/A              21 GiB      2.22       905 GiB        5338     5.34 k      12 MiB      20 KiB       62 GiB
    root-cloud-disk-pool     6      N/A               N/A              23 GiB      2.47       905 GiB        6066     6.07 k     2.1 MiB     198 KiB       69 GiB
7.2.3 JSON output
# JSON output
[root@node210 ~]# ceph df detail -f json
{
	"stats": {
		"total_bytes": 3298522300416,			# ceph集群总空间大小(bytes)
		"total_used_bytes": 208471064576,		# ceph集群已使用空间大小(bytes)
		"total_avail_bytes": 3090051235840,		# ceph集群可用空间大小(bytes)
		"total_objects": 16862					# ceph集群已用对象数量
	},
	"pools": [{
		"name": "cephfs_data",
		"id": 1,
		"stats": {
			"kb_used": 0,						# 存储池使用空间大小(kb类型)
			"bytes_used": 0,					# 存储池使用空间大小(bytes类型)
			"percent_used": 0.000000,			# 存储池使用率
			"max_avail": 975041658880,			# 存储池可用空间大小(bytes类型)
			"objects": 0,
			"quota_objects": 0,
			"quota_bytes": 0,
			"dirty": 0,
			"rd": 0,
			"rd_bytes": 0,
			"wr": 0,
			"wr_bytes": 0,
			"raw_bytes_used": 0
		}
	}, {
		"name": "cephfs_metadata",
		"id": 2,
		"stats": {
			"kb_used": 3,
			"bytes_used": 2286,
			"percent_used": 0.000000,
			"max_avail": 975041658880,
			"objects": 22,
			"quota_objects": 0,
			"quota_bytes": 0,
			"dirty": 22,
			"rd": 49,
			"rd_bytes": 52224,
			"wr": 46,
			"wr_bytes": 13312,
			"raw_bytes_used": 6858
		}
	}, {
		"name": "mirror-pool",
		"id": 3,
		"stats": {
			"kb_used": 24831838,
			"bytes_used": 25427801544,
			"percent_used": 0.025416,
			"max_avail": 975041658880,
			"objects": 6168,
			"quota_objects": 0,
			"quota_bytes": 0,
			"dirty": 6168,
			"rd": 1138771,
			"rd_bytes": 39611227136,
			"wr": 43774,
			"wr_bytes": 29125074944,
			"raw_bytes_used": 76283404288
		}
	}, {
		"name": "mirror-cache-pool",
		"id": 4,
		"stats": {
			"kb_used": 1,
			"bytes_used": 710,
			"percent_used": 0.000000,
			"max_avail": 975041658880,
			"objects": 6,
			"quota_objects": 0,
			"quota_bytes": 0,
			"dirty": 6,
			"rd": 383,
			"rd_bytes": 308224,
			"wr": 59,
			"wr_bytes": 27648,
			"raw_bytes_used": 2130
		}
	}, {
		"name": "data-cloud-disk-pool",
		"id": 5,
		"stats": {
			"kb_used": 21528840,
			"bytes_used": 22045532128,
			"percent_used": 0.022110,
			"max_avail": 975041658880,
			"objects": 5338,
			"quota_objects": 0,
			"quota_bytes": 0,
			"dirty": 5338,
			"rd": 12190102,
			"rd_bytes": 313548800,
			"wr": 20723,
			"wr_bytes": 22069889024,
			"raw_bytes_used": 66136596480
		}
	}, {
		"name": "root-cloud-disk-pool",
		"id": 6,
		"stats": {
			"kb_used": 21103461,
			"bytes_used": 21609943436,
			"percent_used": 0.021683,
			"max_avail": 975041658880,
			"objects": 5328,
			"quota_objects": 0,
			"quota_bytes": 0,
			"dirty": 5328,
			"rd": 2135650,
			"rd_bytes": 90925503488,
			"wr": 175166,
			"wr_bytes": 28356439040,
			"raw_bytes_used": 64829829120
		}
	}]
}

8. Other common commands

# Get the cluster fsid
[root@node210 ~]# ceph fsid
7c9c2ba8-dcec-42b0-8231-a2149988b913

# Get the keyring key (client.admin is a Ceph cluster user)
[root@node210 ~]# ceph auth get-key client.admin
AQCf2KZhhnA6MBAAdMS0rgH2VpMIooo7jwhDuw==

[root@node210 ~]# ceph auth print-key client.admin
AQCf2KZhhnA6MBAAdMS0rgH2VpMIooo7jwhDuw==
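To print the whole keyring entry (key plus capabilities) rather than just the key, a related command is:
ceph auth get client.admin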

9. Use with QEMU

Notes:

# 1. When using qemu-img on an rbd block device, the rbd: prefix must be specified
# 2. When operating from a client and the configuration file is not in the default location, the config file path must also be given (the id can be added too)
# Example
qemu-img info rbd:rbd_pool/foo:id=admin:conf=/etc/ceph/ceph.conf
9.1 Create an image
# Syntax
# To create a block device image from QEMU, the rbd: prefix, the pool name and the image name must be given, along with the image size.
# When operating from a client and the configuration file is not in the default location, the config file path must also be given (the id can be added too)
# Ceph itself only has image formats 1 and 2 (2 is the default), but when creating an image with qemu-img the format given here must be raw
qemu-img create -f raw rbd:<pool-name>/<image-name> <size>
# Example
qemu-img create -f raw rbd:rbd_pool/foo 10G
# Specify the config file (id is the Ceph cluster user, conf is that cluster's configuration file)
qemu-img create -f raw rbd:rbd_pool/foo:id=admin:conf=/etc/ceph/ceph.conf 10G
9.2 Show image information
# Syntax
# When operating from a client and the configuration file is not in the default location, the config file path must also be given (the id can be added too)
qemu-img info rbd:<pool-name>/<image-name>
# Example
qemu-img info rbd:rbd_pool/foo
# Specify the config file (id is the Ceph cluster user, conf is that cluster's configuration file)
[root@node210 ~]# qemu-img info rbd:rbd_pool/foo:id=admin:conf=/etc/ceph/ceph.conf
image: json:{"driver": "raw", "file": {"pool": "rbd_pool", "image": "foo", "conf": "/etc/ceph/ceph.conf", "driver": "rbd", "user": "admin"}}
file format: raw
virtual size: 200M (209715200 bytes)
disk size: unavailable
cluster_size: 4194304
9.3 Resize an image
# Syntax
# When operating from a client and the configuration file is not in the default location, the config file path must also be given (the id can be added too)
# For a ceph block device, omitting the raw format produces a warning
qemu-img resize -f raw rbd:<pool-name>/<image-name> <size>
# Example (resize to 3G: grows the image if it is currently smaller than 3G, shrinks it to 3G if it is larger)
qemu-img resize -f raw rbd:rbd_pool/foo 3G
# Specify the config file (id is the Ceph cluster user, conf is that cluster's configuration file)
qemu-img resize rbd:rbd_pool/foo:id=admin:conf=/etc/ceph/ceph.conf 5G
# Example (grow by 3G relative to the current size)
qemu-img resize -f raw rbd:rbd_pool/foo +3G
# Example (shrink by 1G relative to the current size; --shrink is required here, otherwise a warning is shown)
qemu-img resize -f raw --shrink rbd:rbd_pool/foo -1G
9.4 Convert image formats
# Syntax
# When operating from a client and the configuration file is not in the default location, the config file path must also be given (the id can be added too)
# Parameters:
#	-f: format of the source image; it is auto-detected, so it can be omitted
# 	-O: format of the target image; when converting from a file system to a ceph block device, or from one ceph block device to another, use rbd here
# 	<fname> and <out_fname> are the source file and the converted output file
qemu-img convert -f {qcow2} -O <rbd> <fname> <out_fname>

# Example (file system to ceph block device)
qemu-img convert -f qcow2 -O rbd /mnt/00000 rbd:rbd_pool/foo:id=admin:conf=/etc/ceph/ceph.conf
# Example (ceph block device to file system)
qemu-img convert -f rbd -O qcow2 rbd:rbd_pool/foo:id=admin:conf=/etc/ceph/ceph.conf /mnt/00000 
# Example (ceph block device to ceph block device)
qemu-img convert -f rbd -O rbd rbd:rbd_pool/foo:id=admin:conf=/etc/ceph/ceph.conf rbd:rbd_pool/foo:id=admin2:conf=/etc/ceph2/ceph.conf 

10. Use with libvirt

# To use block devices through libvirt, the host needs a mapping to the Ceph cluster: first generate a secret UUID, then associate the Ceph cluster user's key with it
10.1 View secret UUIDs and keys
10.1.1 List secret UUIDs
# List secret UUIDs
# Notes:
# (1) UUID is the generated secret UUID, "ceph" is the usage type, and the last field is a usage name that can be chosen freely
# (2) Note: this listing does not show whether a secret UUID already has a Ceph cluster key associated with it; check the secret's value to find out
[root@node210 ~]# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 2f966bf1-45aa-4da8-8f99-a5c10fdc9cdf  ceph 2f966bf145aa4da88f99a5c10fdc9cdf_admin
 3df39246-789d-4962-8263-c7355d81093a  ceph 3df39246789d49628263c7355d81093a_admin
 5af7910b-8669-49fe-8116-11c3181ab2a8  ceph 5af7910b866949fe811611c3181ab2a8_admin
 a8917da7-7045-449d-b522-d82017909afb  ceph a8917da77045449db522d82017909afb_admin
10.1.2 View the Ceph key
# View the Ceph cluster key associated with a secret UUID
[root@node210 ~]# virsh secret-get-value 2f966bf1-45aa-4da8-8f99-a5c10fdc9cdf
AQCf2KZhhnA6MBAAdMS0rgH2VpMIooo7jwhDuw==
# Notes:
# (1) If the secret UUID has not been associated with a Ceph cluster key yet, its value cannot be read
[root@node210 ~]# virsh secret-get-value 2f966bf1-45aa-4da8-8f99-a5c10fdc9cdf
error: Secret not found: secret '2f966bf1-45aa-4da8-8f99-a5c10fdc9cdf' does not have a value
# (2) If the secret UUID was defined as private (the private attribute was set), its value cannot be read either
[root@node210 ~]# virsh secret-get-value 1f966bf1-45aa-4da8-8f99-a5c10fdc9cdf
error: Invalid secret: secret is private
10.2 Create a secret UUID
10.2.1 Write the XML that defines the secret UUID
# 1. Write the XML file that defines the secret UUID
# Notes:
# (1) The content of the uuid tag is the secret UUID value; it can be set explicitly or omitted, in which case the system generates one
# (2) The type attribute declares the usage type
# (3) The content of the name tag can be chosen freely
# (4) If private is set to yes, the Ceph key associated with the secret UUID can no longer be read back with virsh secret-get-value 2f966bf1-45aa-4da8-8f99-a5c10fdc9cdf

[root@node210 ~]# vim 2f966bf145aa4da88f99a5c10fdc9cdf.xml
<secret ephemeral='no' private='yes'>
  <uuid>2f966bf1-45aa-4da8-8f99-a5c10fdc9cdf</uuid>
  <usage type='ceph'>
    <name>2f966bf145aa4da88f99a5c10fdc9cdf_admin</name>
  </usage>
</secret>
10.2.2 Define the secret UUID
# 2. Define the secret UUID
# Run the command
# This generates the file 2f966bf145aa4da88f99a5c10fdc9cdf.xml under /etc/libvirt/secrets/
[root@node210 ~]# virsh secret-define --file 2f966bf145aa4da88f99a5c10fdc9cdf.xml
Secret 2f966bf1-45aa-4da8-8f99-a5c10fdc9cdf created
10.2.3 Associate the secret UUID with the Ceph key
# 3. Associate the secret UUID with the Ceph cluster key
# Run the command
# This generates the file 2f966bf1-45aa-4da8-8f99-a5c10fdc9cdf.base64 under /etc/libvirt/secrets/
[root@node210 ~]# virsh secret-set-value --secret 2f966bf1-45aa-4da8-8f99-a5c10fdc9cdf --base64 AQCf2KZhhnA6MBAAdMS0rgH2VpMIooo7jwhDuw==
Secret value set
10.2.4 View the key
# 4. View the key (the value cannot be read if the secret UUID has not been associated with a Ceph key, or if private was set to yes)
[root@node210 ~]# virsh secret-get-value 2f966bf1-45aa-4da8-8f99-a5c10fdc9cdf
AQCf2KZhhnA6MBAAdMS0rgH2VpMIooo7jwhDuw==
10.2.5 Delete the secret UUID
# Run the command
# This removes the files 2f966bf1-45aa-4da8-8f99-a5c10fdc9cdf.base64 and 2f966bf145aa4da88f99a5c10fdc9cdf.xml under /etc/libvirt/secrets/
[root@node210 ~]# virsh secret-undefine 2f966bf1-45aa-4da8-8f99-a5c10fdc9cdf
Secret 2f966bf1-45aa-4da8-8f99-a5c10fdc9cdf deleted
10.3 Using block devices in a virtual machine
10.3.1 Define the VM XML
# A ceph block device is used in a VM as a system disk (data disk) or as a CD-ROM; this section covers both cases
# (1) As a system disk (data disk), the disk tag uses type network and device disk; as a CD-ROM, the disk tag uses type network and device cdrom
# (2) The type attribute of the driver tag is raw
# (3) The protocol attribute of the source tag is rbd, and its name attribute is the pool/image name in the Ceph cluster
# (4) The name attribute of the host tag is a Ceph cluster service IP, and port is the port of the cluster's mon service
# (5) The username attribute of the auth tag is the Ceph cluster user name; the secret type is ceph and its uuid is the secret UUID

# Note:
# (1) If rbd_pool/8e2ab33096ff40b7b24c8300455a369a does not exist, defining the virtual machine fails
# (2) If the uuid in auth has not been associated with the Ceph cluster key, defining the virtual machine fails
# (3) Not all of the cluster's IP:port pairs need to be listed, but the cluster must be healthy, otherwise defining the virtual machine fails

# System disk (data disk)
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='rbd_pool/8e2ab33096ff40b7b24c8300455a369a'>
    <host name='192.168.20.20' port='6789'/>
    <host name='192.168.20.21' port='6789'/>
    <auth username='admin'>
      <secret type='ceph' uuid='2f966bf1-45aa-4da8-8f99-a5c10fdc9cdf'/>
    </auth>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>

# CD-ROM
<disk type='network' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd_pool/8e2ab33096ff40b7b24c8300455a369a'>
    <host name='192.168.20.20' port='6789'/>
    <host name='192.168.20.21' port='6789'/>
    <auth username='admin'>
      <secret type='ceph' uuid='2f966bf1-45aa-4da8-8f99-a5c10fdc9cdf'/>
    </auth>
  </source>
  <target dev='hda' bus='ide'/>
  <readonly/>
</disk>
10.3.2 Define the virtual machine
[root@node210 ~]# virsh define --file xxx.xml
10.3.3 Start the virtual machine
[root@node210 ~]# virsh start xxx