1. Basic environment configuration
Hostname | IP address | Role
ceph-node1 | 192.168.100.10 | osd/mon
ceph-node2 | 192.168.100.20 | osd
ceph-node3 | 192.168.100.30 | osd
ceph-client | 192.168.100.40 | client
(1) Create four virtual machines at 192.168.100.10 (20, 30, 40). On the first three server nodes (10, 20, 30), add one 50 GB SATA disk (sdb) each and change the hostnames; ceph-node1 is used as the example here
[root@localhost ~]# hostnamectl set-hostname ceph-node1
[root@localhost ~]# bash
[root@ceph-node1 ~]#
(2) Edit the /etc/hosts file on all four virtual machines to add the hostname-to-IP mappings; ceph-node1 is used as the example here
[root@ceph-node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10 ceph-node1
192.168.100.20 ceph-node2
192.168.100.30 ceph-node3
192.168.100.40 ceph-client
(3) Configure a local yum repository served over FTP on all Ceph nodes
ceph-node1 configuration:
[root@ceph-node1 ~]# mkdir /opt/centos
[root@ceph-node1 ~]# mount CentOS-7-x86_64-DVD-1804.iso /opt/centos
[root@ceph-node1 ~]# mv /etc/yum.repos.d/* /media/
[root@ceph-node1 ~]# vi /etc/yum.repos.d/local.repo
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
Install and configure the vsftpd service on ceph-node1:
[root@ceph-node1 ~]# yum install -y vsftpd
[root@ceph-node1 ~]# vi /etc/vsftpd/vsftpd.conf
//Add anon_root=/opt/ as the first line of vsftpd.conf
[root@ceph-node1 ~]# systemctl restart vsftpd
[root@ceph-node1 ~]# systemctl stop firewalld
[root@ceph-node1 ~]# systemctl disable firewalld
[root@ceph-node1 ~]# setenforce 0
The configuration on ceph-node2, ceph-node3, and ceph-client is the same:
[root@ceph-node2 ~]# mv /etc/yum.repos.d/* /media/
[root@ceph-node2 ~]# vi /etc/yum.repos.d/local.repo
[centos]
name=centos
baseurl=ftp://192.168.100.10/centos
gpgcheck=0
enabled=1
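To verify that the FTP repository is reachable before installing anything, the yum cache can be rebuilt; an error here usually points to vsftpd or the firewall on ceph-node1:
[root@ceph-node2 ~]# yum clean all
[root@ceph-node2 ~]# yum repolist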
2. Install and configure Ceph
(1) Install ceph-deploy on ceph-node1 and use ceph-deploy to create a Ceph cluster
[root@ceph-node1 ~]# yum install ceph-deploy -y
[root@ceph-node1 ~]# mkdir /etc/ceph
[root@ceph-node1 ~]# cd /etc/ceph
[root@ceph-node1 ceph]# ceph-deploy new ceph-node1
(2) After the new cluster is created on ceph-node1, the generated cluster configuration file and keyring can be seen
[root@ceph-node1 ceph]# ll
total 12
-rw-r--r--. 1 root root 234 Aug 16 17:46 ceph.conf
-rw-r--r--. 1 root root 2985 Aug 16 17:46 ceph.log
-rw-------. 1 root root 73 Aug 16 17:46 ceph.mon.keyring
(3) On ceph-node1, use the ceph-deploy tool to install the Ceph binary packages on all Ceph nodes (enter yes whenever prompted during the process)
[root@ceph-node1 ceph]# ceph-deploy install ceph-node1 ceph-node2 ceph-node3 ceph-client
[root@ceph-node1 ceph]# ceph -v
ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
(4) Create the Ceph monitor on ceph-node1
[root@ceph-node1 ceph]# ceph-deploy --overwrite-conf mon create-initial
(5) After the monitor is created successfully, check the cluster status
At this point the Ceph cluster is still in an unhealthy state because no OSDs have been added yet
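For example, the status can be checked with ceph -s; until OSDs are added, the health line will report a warning or error rather than HEALTH_OK:
[root@ceph-node1 ceph]# ceph -s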
3. Create OSDs
(1) List all available disks on ceph-node1
[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
(2) Use parted to label the disk and create a partition spanning it (note: all 3 nodes must perform the following operations); ceph-node1 is used as the example. In the session below the disk is first given an msdos (MBR) label and then relabeled gpt before a single partition covering the whole disk is created
[root@ceph-node1 ceph]# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel msdos
(parted) p
Model: ATA VMware Virtual S (scsi)
Disk /dev/sdb: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
(parted) mklabel
New disk label type? gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) p
Model: ATA VMware Virtual S (scsi)
Disk /dev/sdb: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
(parted) mkpart
Partition name? []?
File system type? [ext2]?
Start? 0%
End? 100%
(parted) p
Model: ATA VMware Virtual S (scsi)
Disk /dev/sdb: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53.7GB 53.7GB
(parted) q
Information: You may need to update /etc/fstab.
Check the partition layout on each node
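For example, lsblk shows whether the new sdb1 partition exists:
[root@ceph-node1 ceph]# lsblk /dev/sdb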
Format the new partition, mount it, and open its permissions
ceph-node1 configuration:
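A minimal sketch of the ceph-node1 commands, mirroring the ceph-node2/ceph-node3 steps below and assuming the OSD directory /opt/osd1 used by ceph-deploy in step (3):
[root@ceph-node1 ceph]# mkfs.xfs /dev/sdb1
[root@ceph-node1 ceph]# mkdir /opt/osd1
[root@ceph-node1 ceph]# mount /dev/sdb1 /opt/osd1
[root@ceph-node1 ceph]# chmod 777 /opt/osd1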
ceph-node2 and ceph-node3 configuration:
[root@ceph-node2 ~]# mkfs.xfs /dev/sdb1
[root@ceph-node2 ~]# mkdir /opt/osd2
[root@ceph-node2 ~]# mount /dev/sdb1 /opt/osd2
[root@ceph-node2 ~]# chmod 777 /opt/osd2
[root@ceph-node3 ~]# mkfs.xfs /dev/sdb1
[root@ceph-node3 ~]# mkdir /opt/osd3
[root@ceph-node3 ~]# mount /dev/sdb1 /opt/osd3
[root@ceph-node3 ~]# chmod 777 /opt/osd3
(3) On ceph-node1, use the ceph-deploy tool to prepare the OSDs
[root@ceph-node1 ceph]# ceph-deploy osd prepare ceph-node1:/opt/osd1 ceph-node2:/opt/osd2 ceph-node3:/opt/osd3
(4) First stop the firewalls on ceph-node2 and ceph-node3, then on ceph-node1 use the ceph-deploy tool to activate the OSDs
[root@ceph-node2 ~]# systemctl stop firewalld
[root@ceph-node2 ~]# systemctl disable firewalld
[root@ceph-node3 ~]# systemctl stop firewalld
[root@ceph-node3 ~]# systemctl disable firewalld
[root@ceph-node1 ceph]# ceph-deploy osd activate ceph-node1:/opt/osd1 ceph-node2:/opt/osd2 ceph-node3:/opt/osd3
(5) Check the status of the Ceph cluster
The cluster is now in the HEALTH_OK state
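For example, ceph health should now report:
[root@ceph-node1 ceph]# ceph health
HEALTH_OK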
(6) Grant admin access to the other nodes for disaster-recovery purposes
[root@ceph-node1 ceph]# ceph-deploy admin ceph-node{1,2,3}
[root@ceph-node1 ceph]# chmod +r /etc/ceph/ceph.client.admin.keyring
4. Create a Ceph block device and mount it
(1) Use the ceph-deploy tool to copy the Ceph configuration file and ceph.client.admin.keyring to ceph-client
[root@ceph-node1 ceph]# sudo chmod +r /etc/ceph/ceph.client.admin.keyring
[root@ceph-node1 ceph]# ceph-deploy admin ceph-client
(2) Create a block device on the ceph-client node and mount it
Use the rbd create command to create a block device image, then use rbd map to map the image to a kernel block device, as in the sketch below
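A minimal sketch, assuming an image named rbd1 of 10240 MB in the default rbd pool (both the name and the size are placeholders); the first image mapped on the client appears as /dev/rbd0, which is the device used below:
[root@ceph-client ~]# rbd create rbd1 --size 10240
[root@ceph-client ~]# rbd map rbd1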
Format the mapped /dev/rbd0 device and mount it
[root@ceph-client ~]# mkfs.ext4 /dev/rbd0
[root@ceph-client ~]# mount /dev/rbd0 /mnt/
(3) Resize the block device
Use the --size option of rbd resize to change the maximum size of the block device image
Use the resize2fs command to grow the filesystem to match
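For example, growing the hypothetical rbd1 image from the previous step to 20480 MB and then extending the ext4 filesystem on /dev/rbd0 (resize2fs can grow a mounted ext4 filesystem online):
[root@ceph-client ~]# rbd resize rbd1 --size 20480
[root@ceph-client ~]# resize2fs /dev/rbd0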
5. Delete the block device
Problem: deleting the block device with the rbd rm command fails
Solution: on ceph-node1, look up the residual watch information for the image, add the watcher to the OSD blacklist, and then delete the image again from ceph-client
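A sketch of this recovery, assuming the format-1 image rbd1 from the steps above: its header object is rbd1.rbd (a format-2 image uses rbd_header.<image-id> instead), and <watcher-address> stands for the client address:port/nonce reported by listwatchers:
[root@ceph-node1 ceph]# rados -p rbd listwatchers rbd1.rbd
[root@ceph-node1 ceph]# ceph osd blacklist add <watcher-address>
[root@ceph-client ~]# rbd rm rbd1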