OpenStack Administration, Part 24 - Ceph and VM Connection Test Notes

This article records in detail the process of testing Ceph integration with virtual machines (VMs) on two freshly installed Nova compute hosts, covering VM creation, volume management (create, attach, detach, delete), and the resulting impact on Ceph storage usage.


Purpose

Test attaching Ceph-backed volumes to VMs and verify their use

Create the VMs

Hosts 128030 and 129094 are freshly installed nova compute hosts provisioned via puppet

The Ceph-to-VM connection test will be run on these two hosts

nova boot --flavor b2c_web_1core --image Centos6.3_1.3 --security_group default --nic net-id=9106aee4-2dc0-4a6d-a789-10c53e2b88c1 ceph-test01.sh.vclound.com --availability-zone nova:sh-compute-128030.sh.vclound.com
+--------------------------------------+------------------------------------------------------+
| Property                             | Value                                                |
+--------------------------------------+------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                               |
| OS-EXT-AZ:availability_zone          | nova                                                 |
| OS-EXT-SRV-ATTR:host                 | -                                                    |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                    |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000020d                                    |
| OS-EXT-STS:power_state               | 0                                                    |
| OS-EXT-STS:task_state                | scheduling                                           |
| OS-EXT-STS:vm_state                  | building                                             |
| OS-SRV-USG:launched_at               | -                                                    |
| OS-SRV-USG:terminated_at             | -                                                    |
| accessIPv4                           |                                                      |
| accessIPv6                           |                                                      |
| adminPass                            | 5DCHoj8ihwN6                                         |
| config_drive                         |                                                      |
| created                              | 2015-06-25T03:49:14Z                                 |
| flavor                               | b2c_web_1core (5)                                    |
| hostId                               |                                                      |
| id                                   | 99d37977-a13a-4a8b-b8b1-e613a4959623                 |
| image                                | Centos6.3_1.3 (7ec6eb66-b8a2-41e9-bbb5-b1e7ce1efed4) |
| key_name                             | -                                                    |
| metadata                             | {}                                                   |
| name                                 | ceph-test01.sh.vclound.com                           |
| os-extended-volumes:volumes_attached | []                                                   |
| progress                             | 0                                                    |
| security_groups                      | default                                              |
| status                               | BUILD                                                |
| tenant_id                            | bb0b51d166254dc99bc7462c0ac002ff                     |
| updated                              | 2015-06-25T03:49:14Z                                 |
| user_id                              | 226e71f1c1aa4bae85485d1d17b6f0ae                     |
+--------------------------------------+------------------------------------------------------+

nova boot --flavor b2c_web_1core --image Centos6.3_1.3 --security_group default --nic net-id=9106aee4-2dc0-4a6d-a789-10c53e2b88c1 ceph-test02.sh.vclound.com --availability-zone nova:sh-compute-129094.sh.vclound.com
+--------------------------------------+------------------------------------------------------+
| Property                             | Value                                                |
+--------------------------------------+------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                               |
| OS-EXT-AZ:availability_zone          | nova                                                 |
| OS-EXT-SRV-ATTR:host                 | -                                                    |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                    |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000020f                                    |
| OS-EXT-STS:power_state               | 0                                                    |
| OS-EXT-STS:task_state                | scheduling                                           |
| OS-EXT-STS:vm_state                  | building                                             |
| OS-SRV-USG:launched_at               | -                                                    |
| OS-SRV-USG:terminated_at             | -                                                    |
| accessIPv4                           |                                                      |
| accessIPv6                           |                                                      |
| adminPass                            | wHddAW33sFBE                                         |
| config_drive                         |                                                      |
| created                              | 2015-06-25T03:51:03Z                                 |
| flavor                               | b2c_web_1core (5)                                    |
| hostId                               |                                                      |
| id                                   | b433b227-14ab-4157-8f08-362ad680e35e                 |
| image                                | Centos6.3_1.3 (7ec6eb66-b8a2-41e9-bbb5-b1e7ce1efed4) |
| key_name                             | -                                                    |
| metadata                             | {}                                                   |
| name                                 | ceph-test02.sh.vclound.com                           |
| os-extended-volumes:volumes_attached | []                                                   |
| progress                             | 0                                                    |
| security_groups                      | default                                              |
| status                               | BUILD                                                |
| tenant_id                            | bb0b51d166254dc99bc7462c0ac002ff                     |
| updated                              | 2015-06-25T03:51:03Z                                 |
| user_id                              | 226e71f1c1aa4bae85485d1d17b6f0ae                     |
+--------------------------------------+------------------------------------------------------+

Instance status

[root@sh-controller-129022 ~(keystone_admin)]# nova list
+--------------------------------------+-----------------------------+--------+------------+-------------+---------------------------+
| ID                                   | Name                        | Status | Task State | Power State | Networks                  |
+--------------------------------------+-----------------------------+--------+------------+-------------+---------------------------+
| 99d37977-a13a-4a8b-b8b1-e613a4959623 | ceph-test01.sh.vclound.com  | ACTIVE | -          | Running     | SH_DEV_NET=10.198.192.254 |
| b433b227-14ab-4157-8f08-362ad680e35e | ceph-test02.sh.vclound.com  | ACTIVE | -          | Running     | SH_DEV_NET=10.198.192.255 |
+--------------------------------------+-----------------------------+--------+------------+-------------+---------------------------+

Create the volumes

[root@sh-controller-129022 ~(keystone_admin)]# cinder create 50
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2015-06-25T06:11:58.840626      |
| display_description |                 None                 |
|     display_name    |                 None                 |
|      encrypted      |                False                 |
|          id         | 8516fb02-b578-4e57-9678-d30d2b0a6734 |
|       metadata      |                  {}                  |
|         size        |                  50                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[root@sh-controller-129022 ~(keystone_admin)]# cinder create 50
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2015-06-25T06:12:07.151001      |
| display_description |                 None                 |
|     display_name    |                 None                 |
|      encrypted      |                False                 |
|          id         | 9d8aa395-5e6a-411a-9f19-6375f29e9f9f |
|       metadata      |                  {}                  |
|         size        |                  50                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[root@sh-controller-129022 ~(keystone_admin)]# cinder create 50
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2015-06-25T06:12:14.321030      |
| display_description |                 None                 |
|     display_name    |                 None                 |
|      encrypted      |                False                 |
|          id         | a5751c38-01c0-4f25-a02c-7d2a05d6ea36 |
|       metadata      |                  {}                  |
|         size        |                  50                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

List the volumes

[root@sh-controller-129022 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 8516fb02-b578-4e57-9678-d30d2b0a6734 | available |     None     |  50  |     None    |  false   |             |
| 9d8aa395-5e6a-411a-9f19-6375f29e9f9f | available |     None     |  50  |     None    |  false   |             |
| a5751c38-01c0-4f25-a02c-7d2a05d6ea36 | available |     None     |  50  |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

Attach the volumes

[root@sh-controller-129022 ~(keystone_admin)]# nova volume-attach 99d37977-a13a-4a8b-b8b1-e613a4959623 8516fb02-b578-4e57-9678-d30d2b0a6734
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdc                             |
| id       | 8516fb02-b578-4e57-9678-d30d2b0a6734 |
| serverId | 99d37977-a13a-4a8b-b8b1-e613a4959623 |
| volumeId | 8516fb02-b578-4e57-9678-d30d2b0a6734 |
+----------+--------------------------------------+
[root@sh-controller-129022 ~(keystone_admin)]# nova volume-attach 99d37977-a13a-4a8b-b8b1-e613a4959623 9d8aa395-5e6a-411a-9f19-6375f29e9f9f
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdd                             |
| id       | 9d8aa395-5e6a-411a-9f19-6375f29e9f9f |
| serverId | 99d37977-a13a-4a8b-b8b1-e613a4959623 |
| volumeId | 9d8aa395-5e6a-411a-9f19-6375f29e9f9f |
+----------+--------------------------------------+
[root@sh-controller-129022 ~(keystone_admin)]# nova volume-attach b433b227-14ab-4157-8f08-362ad680e35e a5751c38-01c0-4f25-a02c-7d2a05d6ea36
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdc                             |
| id       | a5751c38-01c0-4f25-a02c-7d2a05d6ea36 |
| serverId | b433b227-14ab-4157-8f08-362ad680e35e |
| volumeId | a5751c38-01c0-4f25-a02c-7d2a05d6ea36 |
+----------+--------------------------------------+

Verify the attachments

[root@sh-controller-129022 ~(keystone_admin)]# nova show 99d37977-a13a-4a8b-b8b1-e613a4959623
+--------------------------------------+--------------------------------------------------------------------------------------------------+
| Property                             | Value                                                                                            |
+--------------------------------------+--------------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                                           |
| OS-EXT-AZ:availability_zone          | nova                                                                                             |
| OS-EXT-SRV-ATTR:host                 | sh-compute-128030.sh.vclound.com                                                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | sh-compute-128030.sh.vclound.com                                                                 |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000020d                                                                                |
| OS-EXT-STS:power_state               | 1                                                                                                |
| OS-EXT-STS:task_state                | -                                                                                                |
| OS-EXT-STS:vm_state                  | active                                                                                           |
| OS-SRV-USG:launched_at               | 2015-06-25T03:49:26.000000                                                                       |
| OS-SRV-USG:terminated_at             | -                                                                                                |
| SH_DEV_NET network                   | 10.198.192.254                                                                                   |
| accessIPv4                           |                                                                                                  |
| accessIPv6                           |                                                                                                  |
| config_drive                         |                                                                                                  |
| created                              | 2015-06-25T03:49:14Z                                                                             |
| flavor                               | b2c_web_1core (5)                                                                                |
| hostId                               | 8b5b75df8b0271d739323f1373b7363d432bb9c68b079ab3e94e1c1a                                         |
| id                                   | 99d37977-a13a-4a8b-b8b1-e613a4959623                                                             |
| image                                | Centos6.3_1.3 (7ec6eb66-b8a2-41e9-bbb5-b1e7ce1efed4)                                             |
| key_name                             | -                                                                                                |
| metadata                             | {}                                                                                               |
| name                                 | ceph-test01.sh.vclound.com                                                                       |
| os-extended-volumes:volumes_attached | [{"id": "8516fb02-b578-4e57-9678-d30d2b0a6734"}, {"id": "9d8aa395-5e6a-411a-9f19-6375f29e9f9f"}] | <- two volumes attached
| progress                             | 0                                                                                                |
| security_groups                      | default                                                                                          |
| status                               | ACTIVE                                                                                           |
| tenant_id                            | bb0b51d166254dc99bc7462c0ac002ff                                                                 |
| updated                              | 2015-06-25T03:49:26Z                                                                             |
| user_id                              | 226e71f1c1aa4bae85485d1d17b6f0ae                                                                 |
+--------------------------------------+--------------------------------------------------------------------------------------------------+
[root@sh-controller-129022 ~(keystone_admin)]# nova show b433b227-14ab-4157-8f08-362ad680e35e
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | sh-compute-129094.sh.vclound.com                         |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | sh-compute-129094.sh.vclound.com                         |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000020f                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2015-06-25T03:52:05.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| SH_DEV_NET network                   | 10.198.192.255                                           |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2015-06-25T03:51:03Z                                     |
| flavor                               | b2c_web_1core (5)                                        |
| hostId                               | a5239c63509fa00ab056ca701363538ecc0afe41d8f886f82b345b4d |
| id                                   | b433b227-14ab-4157-8f08-362ad680e35e                     |
| image                                | Centos6.3_1.3 (7ec6eb66-b8a2-41e9-bbb5-b1e7ce1efed4)     |
| key_name                             | -                                                        |
| metadata                             | {}                                                       |
| name                                 | ceph-test02.sh.vclound.com                               |
| os-extended-volumes:volumes_attached | [{"id": "a5751c38-01c0-4f25-a02c-7d2a05d6ea36"}]         | <- one volume attached
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | bb0b51d166254dc99bc7462c0ac002ff                         |
| updated                              | 2015-06-25T03:52:05Z                                     |
| user_id                              | 226e71f1c1aa4bae85485d1d17b6f0ae                         |
+--------------------------------------+----------------------------------------------------------+

Check Cinder status

[root@sh-controller-129022 ~(keystone_admin)]# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| 8516fb02-b578-4e57-9678-d30d2b0a6734 | in-use |     None     |  50  |     None    |  false   | 99d37977-a13a-4a8b-b8b1-e613a4959623 |
| 9d8aa395-5e6a-411a-9f19-6375f29e9f9f | in-use |     None     |  50  |     None    |  false   | 99d37977-a13a-4a8b-b8b1-e613a4959623 |
| a5751c38-01c0-4f25-a02c-7d2a05d6ea36 | in-use |     None     |  50  |     None    |  false   | b433b227-14ab-4157-8f08-362ad680e35e |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

Testing

Test volume read and write

[root@ceph-test01 ~]# pvcreate  /dev/vdc /dev/vdd
  Physical volume "/dev/vdc" successfully created
  Physical volume "/dev/vdd" successfully created
  
[root@ceph-test01 ~]# vgcreate myvg /dev/vdc /dev/vdd
  Volume group "myvg" successfully created
  
[root@ceph-test01 ~]# lvcreate  -i 2 -n mylv  -l 100%FREE  myvg
  Using default stripesize 64.00 KiB
  Logical volume "mylv" created
  
[root@ceph-test01 ~]# yum install -y xfsprogs.x86_64 > /dev/null 2>&1

[root@ceph-test01 ~]# mkfs.xfs /dev/myvg/mylv
meta-data=/dev/myvg/mylv         isize=256    agcount=16, agsize=1638256 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=26212096, imaxpct=25
         =                       sunit=16     swidth=32 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=12800, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@ceph-test01 ~]# mount /dev/myvg/mylv /mnt

[root@ceph-test01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda1              20G  1.1G   18G   6% /
tmpfs                 939M     0  939M   0% /dev/shm
/dev/mapper/myvg-mylv
                      100G   33M  100G   1% /mnt
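
As a sanity check on the numbers above: the XFS data size implied by the mkfs output (26212096 blocks at 4096 bytes each) comes out just under 100 GiB, which matches the df output and explains where the later dd run hits the limit. A minimal shell sketch of the arithmetic:

```shell
# Sketch: compute the XFS data size from the mkfs.xfs figures above.
blocks=26212096   # "blocks=" value reported by mkfs.xfs for the data section
bsize=4096        # "bsize=" value (bytes per block)
bytes=$(( blocks * bsize ))
echo "filesystem data size: ${bytes} bytes"
```

The dd run later reports 107278180352 bytes written before running out of space; the small gap below 107364745216 bytes is taken up by the internal log and filesystem metadata.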

Check current Ceph usage

[root@sh-ceph-128213 ~]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    290T      290T        4615M             0
POOLS:
    NAME        ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd         0           0         0        99157G           0
    volumes     1      37220k         0        99157G          28   <- note: at this point the volumes pool holds only 28 objects
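
Worth noting: three 50 GB volumes have already been created and attached, yet the volumes pool reports only ~37 MB used. RBD images are thin-provisioned, so objects are only allocated when data is actually written. A quick sketch of the gap:

```shell
# Sketch: provisioned vs. actually used space before any data is written.
provisioned_gb=$(( 3 * 50 ))   # three 50 GB Cinder volumes
used_mb=37                     # approximate USED value from `ceph df` above
echo "${provisioned_gb} GB provisioned, ~${used_mb} MB used"
```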

Run the write test on the VM

[root@ceph-test01 ~]# dd if=/dev/zero of=/mnt/1.img bs=1M count=700000
dd: error writing '/mnt/1.img': No space left on device
102309+0 records in
102308+0 records out
107278180352 bytes (107 GB) copied, 982.231 s, 109 MB/s

Monitor Ceph storage usage

[root@sh-ceph-128212 var]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    290T      290T         337G          0.11
POOLS:
    NAME        ID     USED        %USED     MAX AVAIL     OBJECTS
    rbd         0            0         0        99033G           0
    volumes     1      102399M      0.03        99033G       25622    <- dd wrote a single file on the client, but Ceph now shows 25622 - 28 = 25594 objects: tens of thousands of small objects
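
The object count is easy to predict: RBD stripes an image into fixed-size objects, 4 MB by default, so ~100 GB of written data should map to roughly 25600 objects; the observed 25622 - 28 = 25594 is in line with that (a few slices can remain sparse and never get an object). A rough check:

```shell
# Sketch: expected RBD object count, assuming the default 4 MB object size.
data_mb=102399    # growth of the volumes pool reported by `ceph df`
object_mb=4       # default RBD object size
objects=$(( (data_mb + object_mb - 1) / object_mb ))   # ceiling division
echo "expected objects: ${objects}"
```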

Monitor Ceph physical disk usage

[root@sh-ceph-128212 ceph-0]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G  1.8G   49G    4% /
devtmpfs                  32G     0   32G    0% /dev
tmpfs                     32G     0   32G    0% /dev/shm
tmpfs                     32G   18M   32G    1% /run
tmpfs                     32G     0   32G    0% /sys/fs/cgroup
/dev/sda2                494M  123M  372M   25% /boot
/dev/mapper/centos-home  3.6T   33M  3.6T    1% /home
/dev/sdb1                3.7T  3.8G  3.7T    1% /var/lib/ceph/osd/ceph-0   
/dev/sdc1                3.7T  4.2G  3.7T    1% /var/lib/ceph/osd/ceph-1
/dev/sdd1                3.7T  4.3G  3.7T    1% /var/lib/ceph/osd/ceph-2
/dev/sdf1                3.7T  4.2G  3.7T    1% /var/lib/ceph/osd/ceph-3
/dev/sdg1                3.7T  4.1G  3.7T    1% /var/lib/ceph/osd/ceph-4
/dev/sdh1                3.7T  4.2G  3.7T    1% /var/lib/ceph/osd/ceph-5
/dev/sde1                3.7T  4.2G  3.7T    1% /var/lib/ceph/osd/ceph-6
/dev/sdi1                3.7T  3.7G  3.7T    1% /var/lib/ceph/osd/ceph-7
/dev/sdj1                3.7T  3.9G  3.7T    1% /var/lib/ceph/osd/ceph-8  
/dev/sdk1                3.7T  4.6G  3.7T    1% /var/lib/ceph/osd/ceph-9   <- note the used space on each OSD

The file written by dd has been split up and distributed across the different OSD disks.
The amount stored on each disk is not necessarily equal.
No file with the original name appears anywhere under /var/lib/ceph/osd/ceph*;
only a large number of small, scattered object files are visible.
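
This is expected RBD behaviour: each 4 MB slice of the image becomes its own RADOS object, and CRUSH maps each object to a placement group and then to OSDs, so the data ends up spread across all the disks with no single file carrying the original name. Assuming the default 4 MiB object size, the object index for a given byte offset is simply offset divided by the object size (the 12 GiB offset below is just an example value):

```shell
# Sketch: which RBD object index a byte offset falls into,
# assuming the default 4 MiB object size; the offset is a made-up example.
offset=$(( 12 * 1024 * 1024 * 1024 ))   # hypothetical offset: 12 GiB
obj_size=$(( 4 * 1024 * 1024 ))         # 4 MiB
index=$(( offset / obj_size ))
printf 'object index: %d (hex %x)\n' "$index" "$index"
```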

Unmount the volume inside the VM

[root@ceph-test01 ~]# rm -rf /mnt/1.img
[root@ceph-test01 ~]# umount /mnt

Check Ceph usage

[root@sh-controller-128022 cinder]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    290T      290T         304G          0.10
POOLS:
    NAME        ID     USED        %USED     MAX AVAIL     OBJECTS
    rbd         0            0         0        99044G           0
    volumes     1      102399M      0.03        99044G       25622   <- deleting 1.img inside the guest does not immediately reduce Ceph's reported usage

Detach the volumes in OpenStack

[root@sh-controller-129022 ~(keystone_admin)]# nova volume-detach 99d37977-a13a-4a8b-b8b1-e613a4959623 8516fb02-b578-4e57-9678-d30d2b0a6734
[root@sh-controller-129022 ~(keystone_admin)]# nova volume-detach 99d37977-a13a-4a8b-b8b1-e613a4959623 9d8aa395-5e6a-411a-9f19-6375f29e9f9f

Delete the OpenStack volumes

[root@sh-controller-129022 ~(keystone_admin)]# cinder delete 8516fb02-b578-4e57-9678-d30d2b0a6734
[root@sh-controller-129022 ~(keystone_admin)]# cinder delete 9d8aa395-5e6a-411a-9f19-6375f29e9f9f

Check Ceph storage usage again

[root@sh-controller-128022 cinder]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    290T      290T       15961M             0
POOLS:
    NAME        ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd         0           0         0        99131G           0
    volumes     1      37220k         0        99131G          24   <- the objects previously created for 1.img are gone
    
[root@sh-ceph-128212 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G  1.8G   49G    4% /
devtmpfs                  32G     0   32G    0% /dev
tmpfs                     32G     0   32G    0% /dev/shm
tmpfs                     32G   18M   32G    1% /run
tmpfs                     32G     0   32G    0% /sys/fs/cgroup
/dev/sda2                494M  123M  372M   25% /boot
/dev/mapper/centos-home  3.6T   33M  3.6T    1% /home
/dev/sdb1                3.7T   58M  3.7T    1% /var/lib/ceph/osd/ceph-0    <- for reference: disk usage is back to its earlier level
/dev/sdc1                3.7T   62M  3.7T    1% /var/lib/ceph/osd/ceph-1
/dev/sdd1                3.7T   55M  3.7T    1% /var/lib/ceph/osd/ceph-2
/dev/sdf1                3.7T   61M  3.7T    1% /var/lib/ceph/osd/ceph-3
/dev/sdg1                3.7T   63M  3.7T    1% /var/lib/ceph/osd/ceph-4
/dev/sdh1                3.7T   56M  3.7T    1% /var/lib/ceph/osd/ceph-5
/dev/sde1                3.7T   56M  3.7T    1% /var/lib/ceph/osd/ceph-6
/dev/sdi1                3.7T   63M  3.7T    1% /var/lib/ceph/osd/ceph-7
/dev/sdj1                3.7T   59M  3.7T    1% /var/lib/ceph/osd/ceph-8
/dev/sdk1                3.7T   60M  3.7T    1% /var/lib/ceph/osd/ceph-9

This confirms that deleting a volume automatically frees the corresponding space in Ceph.
