Ceph Command Summary
I. Cluster
1. Start a Ceph daemon
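How a daemon is started depends on the init system; on a systemd-managed deployment the per-daemon units are typically named as follows (a sketch; node1 and the OSD id 0 are placeholder values):
# systemctl start ceph-mon@node1
# systemctl start ceph-osd@0
# systemctl start ceph-mds@node1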
2. Check cluster health (append detail to see the health details)
ceph health [detail]
3. Watch the cluster's runtime status in real time (commonly used)
ceph -w
4. Check the cluster status summary (commonly used)
ceph -s
5. Check cluster storage usage
ceph df
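For per-pool detail (object counts, quotas, raw usage), the detail variant can be appended:
# ceph df detail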
6. List the cluster's authenticated users and their keys (commonly used)
ceph auth list
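To print a single user's key and capabilities rather than the whole list:
# ceph auth get client.admin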
7. Show the path of a daemon's log file
ceph-conf --name mon.node1 --show-config-value log_file
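The same query works for any daemon type by changing --name, e.g. for OSD 0 (assuming an osd.0 exists on this host):
# ceph-conf --name osd.0 --show-config-value log_file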
II. MON
1. Show MON status
# ceph mon stat
Output:
e2: 3 mons at {10.186.9.13=10.186.9.13:6789/0,10.186.9.14=10.186.9.14:6789/0,10.186.9.16=10.186.9.16:6789/0}, election epoch 46, leader 0 10.186.9.13, quorum 0,1,2 10.186.9.13,10.186.9.14,10.186.9.16
2. Show MON election/quorum status
# ceph quorum_status
Output:
{
  "election_epoch": 46,
  "quorum": [0, 1, 2],
  "quorum_names": ["10.186.9.13", "10.186.9.14", "10.186.9.16"],
  "quorum_leader_name": "10.186.9.13",
  "monmap": {
    "epoch": 2,
    "fsid": "e6e8accb-6f7d-460e-93fe-06d49b1fde83",
    "modified": "2019-11-19 16:36:44.137334",
    "created": "2019-11-19 16:36:30.935660",
    "features": {"persistent": ["kraken", "luminous"], "optional": []},
    "mons": [
      {"rank": 0, "name": "10.186.9.13", "addr": "10.186.9.13:6789/0", "public_addr": "10.186.9.13:6789/0"},
      {"rank": 1, "name": "10.186.9.14", "addr": "10.186.9.14:6789/0", "public_addr": "10.186.9.14:6789/0"},
      {"rank": 2, "name": "10.186.9.16", "addr": "10.186.9.16:6789/0", "public_addr": "10.186.9.16:6789/0"}
    ]
  }
}
3. Dump the MON map
# ceph mon dump
Output:
dumped monmap epoch 1
epoch 1
fsid c71f378b-e86b-4c49-8b54-a9d85a6122c2
last_changed 2020-08-03 16:50:50.647195
created 2020-08-03 16:50:50.647195
0: 10.5.29.54:6789/0 mon.10.5.29.54
4. Remove a MON node
# ceph mon remove node1
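The inverse operation, adding a MON, takes a name and an address; a sketch with placeholder values:
# ceph mon add node1 10.186.9.13:6789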
5. Get the map of a running MON, saved as the binary file mon.bin
# ceph mon getmap -o mon.bin
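The binary map can then be inspected offline with monmaptool:
# monmaptool --print mon.bin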
III. MDS
1. Show MDS status
# ceph mds stat
Output:
cephfs-1/1/1 up {0=openstack1=up:active}
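To list the filesystems themselves together with their data and metadata pools:
# ceph fs ls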
2. Dump the MDS map
# ceph mds dump
Output:
dumped fsmap epoch 45552
fs_name cephfs
epoch 45552
flags c
created 2020-09-08 18:03:49.514279
modified 2020-09-25 09:25:56.099961
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 210
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2}
max_mds 1
in 0
up {0=2084098}
failed
damaged
stopped
data_pools [7]
metadata_pool 8
inline_data disabled
balancer
standby_count_wanted 0
2084098: 10.5.29.54:6800/1252110594 'openstack1' mds.0.45540 up:active seq 23
3. Remove an MDS node
# ceph mds rm 0 mds.node1
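In practice the daemon is usually stopped and its rank failed before removal; a sketch, assuming rank 0 runs on node1:
# systemctl stop ceph-mds@node1
# ceph mds fail 0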
IV. OSD
1. Show OSD status
# ceph osd stat
Output:
1 osds: 1 up, 1 in
2. Dump the OSD map
# ceph osd dump
Output:
epoch 218
fsid c71f378b-e86b-4c49-8b54-a9d85a6122c2
created 2020-08-03 16:50:50.839507
modified 2020-09-30 13:18:29.712934
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 7
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release luminous
pool 1 'images' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 16 pgp_num 16 last_change 118 lfor 0/116 flags hashpspool stripe_width 0 application rbd
removed_snaps [1~3,8~2]
pool 2 'volumes' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 16 pgp_num 16 last_change 218 lfor 0/119 flags hashpspool stripe_width 0 application rbd
removed_snaps [1~3]
pool 3 'backups' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 16 pgp_num 16 last_change 200 lfor 0/125 flags hashpspool stripe_width 0 application rbd
pool 4 'vms' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 16 pgp_num 16 last_change 124 lfor 0/122 flags hashpspool stripe_width 0 application rbd
removed_snaps [1~3]
pool 5 'k8s' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 60 flags hashpspool stripe_width 0
pool 6 '.rgw.root' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 132 flags hashpspool stripe_width 0 application rgw
pool 7 'cephfs_data' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 173 flags hashpspool stripe_width 0 application cephfs
pool 8 'cephfs_metadata' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 174 flags hashpspool stripe_width 0 application cephfs
pool 9 'default.rgw.control' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 151 flags hashpspool stripe_width 0 application rgw
pool 10 'default.rgw.meta' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 156 flags hashpspool stripe_width 0 application rgw
pool 11 'default.rgw.log' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 160 flags hashpspool stripe_width 0 application rgw
max_osd 1
osd.0 up in weight 1 up_from 212 up_thru 212 down_at 211 last_clean_interval [207,209) 10.5.29.54:6802/5668 10.5.29.54:6803/5668 10.5.29.54:6804/5668 10.5.29.54:6805/5668 exists,up 9fd52adc-c323-42a0-b130-a779bbdaf0b4
3. Show the OSD tree
# ceph osd tree
Output:
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.64000 root default
-2 0 host 10.5.29.54
-5 0.64000 host openstack1
0 hdd 0.64000 osd.0 up 1.00000 1.00000
4. Show per-OSD disk utilization
# ceph osd df
Output:
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
0 hdd 0.64000 1.00000 600GiB 11.0GiB 589GiB 1.83 1.00 120
TOTAL 600GiB 11.0GiB 589GiB 1.83
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
5. Show OSD latency
# ceph osd perf
Output:
osd commit_latency(ms) apply_latency(ms)
0 0 0
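A few maintenance commands commonly used alongside these checks (the OSD id 0 and the weight are placeholders; out/in move data off of or back onto an OSD, reweight adjusts its relative weight):
# ceph osd out 0
# ceph osd in 0
# ceph osd reweight 0 0.8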
V. PG (placement groups)
1. Dump PG mapping information
# ceph pg dump
Output:
dumped all
version 607481
stamp 2020-10-09 11:16:32.865738
last_osdmap_epoch 0
last_pg_scan 0
full_ratio 0
nearfull_ratio 0
PG_STAT OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES LOG DISK_LOG STATE STATE_STAMP VERSION REPORTED UP UP_PRIMARY ACTING ACTING_PRIMARY LAST_SCRUB SCRUB_STAMP LAST_DEEP_SCRUB DEEP_SCRUB_STAMP SNAPTRIMQ_LEN
11.3 22 0 0 0 0 0 1554 1554 active+clean 2020-10-08 23:40:08.999691 218'65354 218:98261 [0] 0 [0] 0 218'63422 2020-10-08 23:40:08.999638 218'58648 2020-10-07 19:21:09.086796 0
10.2 0 0 0 0 0 0 0 0 active+clean 2020-10-08 17:03:50.360386 0'0 218:145 [0] 0 [0] 0 0'0 2020-10-08 17:03:50.360073 0'0 2020-10-03 16:33:37.680982 0
9.1 1 0 0 0 0 0 15 15 active+clean 2020-10-08 08:10:13.634676 213'15 218:261552 [0] 0 [0] 0 213'15 2020-10-08 08:10:13.634540 213'15 2020-10-07 07:55:43.794747 0
3.b 0 0 0 0 0 0 0 0 active+clean 2020-10-09 08:05:07.603764 0'0 218:441 [0] 0 [0] 0 0'0 2020-10-09 08:05:07.603570 0'0 2020-10-08 04:10:55.985889 0
2.a 3 0 0 0 0 117 696 696 active+clean 2020-10-08 01:01:15.979950 218'696 218:124863 [0] 0 [0] 0 218'695 2020-10-08 01:01:15.979563 218'695 2020-10-05 09:23:23.610962 0
1.9 86 0 0 0 0 684376727 188 188 active+clean 2020-10-08 08:27:35.916078 71'188 218:836 [0] 0 [0] 0 71'188 2020-10-08 08:27:35.915814 71'188 2020-10-08 08:27:35.915814 0
4.c 7 0 0 0 0 8402530 1563 1563 active+clean 2020-10-09 05:48:38.892392 218'394363 218:672698 [0] 0 [0] 0 217'394362 2020-10-09 05:48:38.892240 217'394362 2020-10-08 04:31:51.649845 0
8.0 4 0 0 0 0 0 8 8 active+clean 2020-10-09 05:11:56.978009 162'8 218:267 [0] 0 [0] 0 162'8 2020-10-09 05:11:56.977695 162'8 2020-10-05 12:27:57.351136 0
11.2 26 0 0 0 0 0 1594 1594 active+clean 2020-10-08 12:58:06.268311 218'90994 218:136755 [0] 0 [0] 0 218'85762 2020-10-08 12:58:06.268206 218'52208 2020-10-02 13:31:54.916159 0
10.3 0 0 0 0 0 0 0 0 active+clean 2020-10-09 00:13:54.479007 0'0 218:145 [0] 0 [0] 0 0'0 2020-10-09 00:13:54.478738 0'0 2020-10-02 12:03:50.578100 0
9.0 1 0 0 0 0 0 15 15 active+clean 2020-10-09 05:20:58.550242 213'15 218:268427 [0] 0 [0] 0 213'15 2020-10-09 05:20:58.550128 213'15 2020-10-07 21:28:01.376248 0
3.a 0 0 0 0 0 0 0 0 active+clean 2020-10-08 18:27:30.668569 0'0 218:439 [0] 0 [0] 0 0'0 2020-10-08 18:27:30.668417 0'0 2020-10-07 16:04:00.976150 0
2.b 1 0 0 0 0 98 106 106 active+clean 2020-10-08 10:05:35.206002 217'106 218:703 [0] 0 [0] 0 217'106 2020-10-08 10:05:35.205802 217'106 2020-10-06 03:38:00.975439 0
1.8 86 0 0 0 0 717931008 287 287 active+clean 2020-10-08 07:06:10.779285 70'287 218:1569 [0] 0 [0] 0 70'287 2020-10-08 07:06:10.778936 70'287 2020-10-08 07:06:10.778936 0
4.d 5 0 0 0 0 16388608 1546 1546 active+clean 2020-10-09 08:30:59.011121 218'624246 218:727723 [0] 0 [0] 0 217'624240 2020-10-09 08:30:59.010908 217'624240 2020-10-06 23:57:54.087248 0
8.1 0 0 0 0 0 0 0 0 active+clean 2020-10-09 02:16:40.314110 0'0 218:249 [0] 0 [0]
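The full dump is long; for a single PG, the mapping or a detailed query is often more convenient (1.9 is one of the PG ids from the dump above):
# ceph pg map 1.9
# ceph pg 1.9 query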

This article has listed the common commands for managing a Ceph distributed storage cluster and inspecting its key components: starting Ceph daemons, checking MON status, checking OSD state, dumping PG mapping information, and more, covering everything from basic monitoring to deeper diagnostics.