Deploying ceph-12 (luminous) on CentOS 7 -- multi-node cluster

This post describes how to manually deploy a Ceph Luminous multi-node cluster on CentOS 7: preparation, creating a working directory, writing the configuration file, generating and distributing the keyring and monmap, and adding and initializing the monitors, OSDs and mgr.

0. Preparation

The previous post (link) only deployed a single-node cluster. In this post we manually deploy a multi-node cluster named mycluster. We have three machines: node1, node2 and node3; node1 can ssh/scp to the other two without a password. All of our work is done on node1.

Preparation consists of installing the ceph rpm packages on every machine (see section 1 of the previous post (link)) and modifying the following files on every machine:

/usr/lib/systemd/system/ceph-mon@.service
/usr/lib/systemd/system/ceph-osd@.service
/usr/lib/systemd/system/ceph-mds@.service
/usr/lib/systemd/system/ceph-mgr@.service
/usr/lib/systemd/system/ceph-radosgw@.service

Make these changes in each file:

Environment=CLUSTER=ceph                                                  <--- change to CLUSTER=mycluster
ExecStart=/usr/bin/... --id %i --setuser ceph --setgroup ceph    <--- remove --setuser ceph --setgroup ceph
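The two edits above can be scripted so they are applied identically on every node. The sketch below is one way to do it with sed, assuming the stock unit files listed above; the helper name `fix_ceph_units` is my own. Run it as root, then `systemctl daemon-reload` so systemd re-reads the files.

```shell
# Rewrite the five ceph unit files under the given directory:
#   - change CLUSTER=ceph to CLUSTER=mycluster
#   - drop the "--setuser ceph --setgroup ceph" options
fix_ceph_units() {
    dir="$1"
    for unit in mon osd mds mgr radosgw; do
        f="${dir}/ceph-${unit}@.service"
        [ -f "$f" ] || continue
        sed -i \
            -e 's/^Environment=CLUSTER=ceph$/Environment=CLUSTER=mycluster/' \
            -e 's/ --setuser ceph --setgroup ceph//' \
            "$f"
    done
}

# On a real node (as root): fix_ceph_units /usr/lib/systemd/system
```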


1. Create the working directory

Create a working directory on node1; all subsequent steps are performed in this directory on node1:

mkdir /tmp/mk-ceph-cluster
cd /tmp/mk-ceph-cluster

2. Create the configuration file

vim mycluster.conf
[global]
    cluster                     = mycluster
    fsid                        = 116d4de8-fd14-491f-811f-c1bdd8fac141

    public network              = 192.168.100.0/24
    cluster network             = 192.168.73.0/24

    auth cluster required       = cephx
    auth service required       = cephx
    auth client required        = cephx

    osd pool default size       = 3
    osd pool default min size   = 2

    osd pool default pg num     = 128
    osd pool default pgp num    = 128

    osd pool default crush rule = 0
    osd crush chooseleaf type   = 1

    admin socket                = /var/run/ceph/$cluster-$name.asock
    pid file                    = /var/run/ceph/$cluster-$name.pid
    log file                    = /var/log/ceph/$cluster-$name.log
    log to syslog               = false

    max open files              = 131072
    ms bind ipv6                = false

[mon]
    mon initial members = node1,node2,node3
    mon host = 192.168.100.131:6789,192.168.100.132:6789,192.168.100.133:6789

    ;Yuanguo: the default value of {mon data} is /var/lib/ceph/mon/$cluster-$id,
    ;         we overwrite it.
    mon data                     = /var/lib/ceph/mon/$cluster-$name
    mon clock drift allowed      = 10
    mon clock drift warn backoff = 30

    mon osd full ratio           = .95
    mon osd nearfull ratio       = .85

    mon osd down out interval    = 600
    mon osd report timeout       = 300

    debug ms                     = 20
    debug mon                    = 20
    debug paxos                  = 20
    debug auth                   = 20
    mon allow pool delete        = true  ; without this, you cannot delete pools
[mon.node1]
    host                         = node1
    mon addr                     = 192.168.100.131:6789
[mon.node2]
    host                         = node2
    mon addr                     = 192.168.100.132:6789
[mon.node3]
    host                         = node3
    mon addr                     = 192.168.100.133:6789

[mgr]
    ;Yuanguo: the default value of {mgr data} is /var/lib/ceph/mgr/$cluster-$id,
    ;         we overwrite it.
    mgr data                     = /var/lib/ceph/mgr/$cluster-$name

[osd]
    ;Yuanguo: we wish to overwrite {osd data}, but it seems that 'ceph-disk' forces
    ;     to use the default value, so keep the default now; may