Deploying a Ceph Cluster with ceph-deploy

This document walks through preparing servers for a Ceph cluster: configuring clock synchronization against Aliyun's NTP servers, pointing the Ubuntu servers at the Aliyun package mirror, creating a ceph user with passwordless sudo, setting up passwordless SSH login from the deploy node to every server, and finally making sure hostname resolution is correct. These steps are the foundation of a stable Ceph cluster.

1. Ceph Cluster Architecture

2. Preparation

2.1 Configure clock synchronization

root@ceph-deploy:~# crontab -e
# append:
*/5 * * * * /usr/sbin/ntpdate time1.aliyun.com &> /dev/null && hwclock -w &> /dev/null

root@ceph-deploy:~# cat /etc/hosts
# append:
# Aliyun NTP server
120.25.115.20    ntpdate ntp1.aliyun.com
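As an alternative to the cron job, Ubuntu 18.04 ships systemd-timesyncd, which can keep the clock synced against the same server as a daemon. A sketch (verify the option name against timesyncd.conf(5) on your release):

```shell
# Point systemd-timesyncd at the Aliyun NTP server instead of cron + ntpdate.
# Edit /etc/systemd/timesyncd.conf so its [Time] section reads:
#   NTP=ntp1.aliyun.com
systemctl restart systemd-timesyncd
timedatectl set-ntp true    # enable continuous NTP sync via timesyncd
```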

2.2 Prepare the package mirror

    Applies to all 10 servers

root@ceph-deploy:~# cat /etc/hosts
# append:
183.204.196.229 mirrors.aliyun.com

root@ceph-deploy:~# cat /etc/apt/sources.list
# Configure the Aliyun mirror sources
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
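Rather than typing the list by hand, the stock Ubuntu entries can be rewritten to the Aliyun mirror in one pass. A sketch shown on a sample line; on a real node you would run the same sed expression with `-i` against /etc/apt/sources.list:

```shell
# Rewrite the default archive host to the Aliyun mirror on a sample line.
printf 'deb http://archive.ubuntu.com/ubuntu/ bionic main restricted universe multiverse\n' \
  | sed 's|archive.ubuntu.com|mirrors.aliyun.com|'
```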

root@ceph-deploy:~# apt update
Hit:1 http://mirrors.aliyun.com/ubuntu bionic InRelease
Get:2 http://mirrors.aliyun.com/ubuntu bionic-security InRelease [88.7 kB]
Get:3 http://mirrors.aliyun.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:4 http://mirrors.aliyun.com/ubuntu bionic-proposed InRelease [242 kB]
Get:5 http://mirrors.aliyun.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:6 http://mirrors.aliyun.com/ubuntu bionic-updates/main amd64 Packages [2,249 kB]                                                                                                                          
Get:7 http://mirrors.aliyun.com/ubuntu bionic-updates/main i386 Packages [1,360 kB]                                                                                                                           
Fetched 4,103 kB in 15s (275 kB/s)                                                                                                                                                                            
Reading package lists... Done
Building dependency tree       
Reading state information... Done
159 packages can be upgraded. Run 'apt list --upgradable' to see them.

2.3 Create the ceph user and configure passwordless sudo

    Applies to all 10 servers

root@ceph-deploy:~# groupadd -r -g 2022 ceph && useradd -r -m -s /bin/bash -u 2022 -g 2022 ceph && echo ceph:123456 | chpasswd

root@ceph-deploy:~# echo "ceph    ALL=(ALL)    NOPASSWD:ALL" >> /etc/sudoers
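Appending to /etc/sudoers directly works, but a drop-in file under /etc/sudoers.d is easier to manage and can be syntax-checked before use. A sketch, run as root:

```shell
# Safer variant: a sudoers drop-in instead of editing /etc/sudoers in place.
echo 'ceph    ALL=(ALL)    NOPASSWD:ALL' > /etc/sudoers.d/ceph
chmod 0440 /etc/sudoers.d/ceph
visudo -cf /etc/sudoers.d/ceph   # syntax check before relying on the rule
```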

2.4 Configure passwordless SSH login from the deploy node

    2.4.1 On the deploy node

root@ceph-deploy:~# su - ceph
ceph@ceph-deploy:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa):
Created directory '/home/ceph/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ceph/.ssh/id_rsa.
Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:akbiDJMo4dk15ClvMeAQJ2MsX1f+t0+VvNA3ty/e5ks ceph@ceph-deploy.example.local
The key's randomart image is:
+---[RSA 2048]----+
|.*.o . ..        |
|o.* = o.         |
|o. + O  .        |
|.o+.+ +  .    o .|
|oo+..o. S . .. =+|
|.  =.o .   . ...*|
|    o +     . .E |
|     o       oo.o|
|             .o==|
+----[SHA256]-----+

    2.4.2 Copy the public key to the other servers

ceph@ceph-deploy:~$ ssh-copy-id ceph@192.168.2.20
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ceph/.ssh/id_rsa.pub"
The authenticity of host '192.168.2.20 (192.168.2.20)' can't be established.
ECDSA key fingerprint is SHA256:9kHyC5k68pyboHx6VtTk2Id+y5UEBN3P0ZyM0srTZBc.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ceph@192.168.2.20's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'ceph@192.168.2.20'"
and check to make sure that only the key(s) you wanted were added.

Repeat for the remaining servers:
   ssh-copy-id ceph@192.168.2.21
   ssh-copy-id ceph@192.168.2.22
   ssh-copy-id ceph@192.168.2.23
   ssh-copy-id ceph@192.168.2.24
   ssh-copy-id ceph@192.168.2.25
   ssh-copy-id ceph@192.168.2.26
   ssh-copy-id ceph@192.168.2.27
   ssh-copy-id ceph@192.168.2.28
   ssh-copy-id ceph@192.168.2.29
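The ten ssh-copy-id invocations can be collapsed into a loop over the node IPs. A sketch; the `echo` makes this a dry run, remove it to actually distribute the key (each node will still prompt for the ceph password once):

```shell
# Dry run: print the ssh-copy-id command for each of the ten nodes.
for i in $(seq 20 29); do
  echo ssh-copy-id "ceph@192.168.2.$i"
done
```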

2.5 Configure hostname resolution

    Applies to all 10 servers

root@ceph-deploy:~# cat /etc/hosts
# append:
101.6.15.130    mirrors.tuna.tsinghua.edu.cn
183.204.196.229 mirrors.aliyun.com
# Aliyun NTP server
120.25.115.20    ntpdate ntp1.aliyun.com


192.168.2.20    ceph-node1.example.local ceph-node1
192.168.2.21    ceph-node2.example.local ceph-node2
192.168.2.22    ceph-node3.example.local ceph-node3
192.168.2.23    ceph-node4.example.local ceph-node4
192.168.2.24    ceph-mgr1.example.local ceph-mgr1
192.168.2.25    ceph-mgr2.example.local ceph-mgr2
192.168.2.26    ceph-mon1.example.local ceph-mon1
192.168.2.27    ceph-mon2.example.local ceph-mon2
192.168.2.28    ceph-mon3.example.local ceph-mon3
192.168.2.29    ceph-deploy.example.local ceph-deploy
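Since the node IPs are consecutive, the ten entries above can also be generated programmatically and reviewed before appending to /etc/hosts on each server. A sketch whose output matches the table above:

```shell
# Generate the ten node entries; IPs run 192.168.2.20 through .29 in order.
i=20
for name in ceph-node1 ceph-node2 ceph-node3 ceph-node4 ceph-mgr1 ceph-mgr2 ceph-mon1 ceph-mon2 ceph-mon3 ceph-deploy; do
  printf '192.168.2.%d    %s.example.local %s\n' "$i" "$name" "$name"
  i=$((i + 1))
done
```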
