Fix for node-server OSD services failing to start after a CEPH cluster restart, as shown by ceph osd status and ceph -s
Preface
- In a lab environment I deployed a Ceph cluster (integrated with multi-node OpenStack). After a reboot, the cluster's OSD services ran into problems; the fix is documented below.
- For the multi-node Ceph + OpenStack deployment itself, see my other blog post if you are interested: https://blog.youkuaiyun.com/CN_TangZheng/article/details/104745364
1: The error
[root@ct ~]# ceph -s '//check the ceph cluster status'
  cluster:
    id:     8c9d2d27-492b-48a4-beb6-7de453cf45d6
    health: HEALTH_WARN '//health check reports WARN'
            1 osds down
            1 host (1 osds) down
            Reduced data availability: 192 pgs inactive
            Degraded data redundancy: 812/1218 objects degraded (66.667%), 116 pgs degraded, 192 pgs undersized
            clock skew detected on mon.c1, mon.c2

  services:
    mon: 3 daemons, quorum ct,c1,c2
    mgr: ct(active), standbys: c1, c2
    osd: 3 osds: 1 up, 2 in '//two of the three OSDs are down'

  data:
    pools: 3 pools, 192 pgs
    objects: 406 objects, 1.8 GiB
    usage: 2.8 GiB used, 1021 GiB / 1024 GiB
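
The output above shows OSDs down and a clock skew on mon.c1 and mon.c2. Before going into the fix, a quick way to locate the affected OSDs is sketched below; the node name c1 and the OSD id 1 are only examples based on my ct/c1/c2 layout, so substitute whatever ceph osd tree reports as down.

[root@ct ~]# ceph osd tree '//lists each OSD, the host it lives on, and whether it is up or down'
[root@ct ~]# ceph osd status '//per-OSD summary, the command mentioned in the title'
[root@c1 ~]# systemctl status ceph-osd@1 '//run on the node whose OSD is down; "1" is an example OSD id'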