Deploying Ceph 0.87 (Giant)
My company needed this particular release, so I looked into how to install this version of Ceph and wrote the process down as I went.
Dependency packages caused me quite a bit of trouble during the installation, but I resolved the issues one by one.
Because of those dependencies I went with a semi-offline approach: I downloaded the ceph-giant packages and built a local CentOS 7.2 repository, while the epel and extras repositories are still used online.
Ceph-giant package download address: http://pub.mirrors.aliyun.com/ceph/rpm-giant/
Environment
Host | OS | IP |
Ceph01 | CentOS7.2 | 66.66.66.235 |
Ceph02 | CentOS7.2 | 66.66.66.237 |
Ceph03 | CentOS7.2 | 66.66.66.238 |
All of the following steps are performed on all three nodes.
Building the yum repositories
Several repositories are used here: centos_7.2 and ceph_giant are local, while the rest are online.
# vi /etc/yum.repos.d/ceph.repo
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=http://mirrors.aliyun.com/epel/7/$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=http://mirrors.aliyun.com/epel/7/$basearch/debug
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0
[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=http://mirrors.aliyun.com/epel/7/SRPMS
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
http://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
[centos_7.2]
name=centos_7.2
baseurl=ftp://66.66.66.235/centos_7.2
enabled=1
gpgcheck=0
[ceph_giant]
name=ceph_giant
baseurl=ftp://66.66.66.235/ceph_g
enabled=1
gpgcheck=0
Configuring host name mappings
[root@ceph02 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
66.66.66.235 ceph01
66.66.66.237 ceph02
66.66.66.238 ceph03
Setting up passwordless SSH access
[root@ceph01 ~]# ssh-keygen
[root@ceph01 ~]# ssh-copy-id ceph01
[root@ceph01 ~]# ssh-copy-id ceph02
[root@ceph01 ~]# ssh-copy-id ceph03
Disabling the firewall and SELinux
[root@ceph02 ~]# systemctl stop firewalld
[root@ceph02 ~]# systemctl disable firewalld
[root@ceph02 ~]# setenforce 0
[root@ceph02 ~]# sed -i 's/=enforcing/=disabled/' /etc/selinux/config
Time synchronization
[root@ceph02 ~]# yum install ntpdate -y
[root@ceph02 ~]# ntpdate 66.66.66.71
Here I sync against a local NTP server of mine; you can also sync against a public one instead,
for example: ntpdate ntp1.aliyun.com
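Ceph monitors warn about clock skew between nodes, so a one-off ntpdate run may drift out of sync again over time. One hypothetical way to keep the clocks aligned is a cron entry against the same server (the interval and the ntpdate path are my assumptions, not part of the original setup):

```
# /etc/crontab addition (hypothetical): re-sync every 10 minutes
# against the local NTP server used above
*/10 * * * * root /usr/sbin/ntpdate 66.66.66.71 >/dev/null 2>&1
```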
Installing the Ceph packages
[root@ceph02 ~]# yum install -y ceph ceph-radosgw ceph-debuginfo ceph-devel ceph-fuse ceph-libs-compat ceph-test cephfs-java rbd-fuse rest-bench
Checking the Ceph version
[root@ceph01 my-cluster]# ceph -v
ceph version 0.87.2 (87a7cec9ab11c677de2ab23a7668a77d2f5b955e)
Perform the following steps on ceph01 only.
Writing the /root/.ssh/config file
Host ceph01
Hostname ceph01
User root
Host ceph02
Hostname ceph02
User root
Host ceph03
Hostname ceph03
User root
For convenience I use the root user here; you can configure a different user instead.
Creating a working directory
[root@ceph01 ~]# mkdir my-cluster
[root@ceph01 ~]# cd my-cluster
Creating the cluster
[root@ceph01 my-cluster]# ceph-deploy new ceph01
Editing the ceph.conf file
[global]
fsid = 3ec3f0f5-74d7-41ce-a497-1425705cd717
mon_initial_members = ceph01
mon_host = 66.66.66.235
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd pool default size = 2
osd pool default pg num = 128
osd pool default pgp num = 128
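As a sanity check on the pg num values above, a commonly cited rule of thumb (my assumption, not something this walkthrough states) is total PGs ≈ (number of OSDs × 100) / replica count, rounded to a power of two:

```shell
# Rule-of-thumb PG count for this cluster (3 OSDs, pool size 2).
osds=3        # one OSD per node
replicas=2    # "osd pool default size" above
target=$(( osds * 100 / replicas ))   # 150
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "$pg"    # 256
```

This yields 256; the walkthrough settles on 128, which is also workable for a small test cluster.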
Adding the initial monitor and collecting the keys
[root@ceph01 my-cluster]# ceph-deploy mon create-initial
Creating the monitor
[root@ceph01 my-cluster]# ceph-deploy mon create ceph01
Collecting the keys
[root@ceph01 my-cluster]# ceph-deploy gatherkeys ceph01
Creating the OSD directories
These directories follow the official guide; you can also activate a partition directly instead, for example an attached sdb disk.
[root@ceph01 my-cluster]# mkdir /var/local/osd1
[root@ceph01 my-cluster]# ssh ceph02 "mkdir /var/local/osd2"
[root@ceph01 my-cluster]# ssh ceph03 "mkdir /var/local/osd3"
[root@ceph01 my-cluster]# chmod 777 /var/local/osd1
[root@ceph01 my-cluster]# ssh ceph02 "chmod 777 /var/local/osd2"
[root@ceph01 my-cluster]# ssh ceph03 "chmod 777 /var/local/osd3"
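The six per-node commands above can be collapsed into one loop. This is a sketch using the walkthrough's node-to-directory mapping; the helper names osd_dir and make_osd_dirs are mine:

```shell
# Maps an OSD index to the directory layout used above.
osd_dir() { printf '/var/local/osd%s' "$1"; }

# Creates and opens up each OSD directory over SSH; run from ceph01
# (ceph01 can ssh to itself because its own key was copied earlier).
make_osd_dirs() {
    i=1
    for node in ceph01 ceph02 ceph03; do
        dir=$(osd_dir "$i")
        ssh "root@${node}" "mkdir -p ${dir} && chmod 777 ${dir}"
        i=$(( i + 1 ))
    done
}
```

Calling make_osd_dirs from ceph01 reproduces the six commands above in one go.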
Preparing and activating the OSDs
[root@ceph01 my-cluster]# ceph-deploy osd prepare ceph01:/var/local/osd1 ceph02:/var/local/osd2 ceph03:/var/local/osd3
[root@ceph01 my-cluster]# ceph-deploy osd activate ceph01:/var/local/osd1 ceph02:/var/local/osd2 ceph03:/var/local/osd3
Copying the keys to the other nodes and setting permissions
[root@ceph01 my-cluster]# ceph-deploy admin ceph01 ceph02 ceph03
[root@ceph01 my-cluster]# chmod 777 /etc/ceph/ceph.client*
[root@ceph01 my-cluster]# ssh ceph02 "chmod 777 /etc/ceph/ceph.client*"
[root@ceph01 my-cluster]# ssh ceph03 "chmod 777 /etc/ceph/ceph.client*"
Checking the cluster status
[root@ceph01 my-cluster]# ceph -s
cluster 3ec3f0f5-74d7-41ce-a497-1425705cd717
health HEALTH_OK
monmap e1: 1 mons at {ceph01=66.66.66.235:6789/0}, election epoch 1, quorum 0 ceph01
osdmap e42: 3 osds: 3 up, 3 in
pgmap v486: 320 pgs, 3 pools, 16 bytes data, 3 objects
27097 MB used, 243 GB / 269 GB avail
320 active+clean
The deployment has succeeded only once the cluster status reports HEALTH_OK. If it shows HEALTH_WARN, you need to go back and find which step went wrong; as a beginner myself I cannot offer much troubleshooting advice here.
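If you want to script around this check, a tiny helper (the name cluster_ok is a hypothetical one of mine) can turn the status text into an exit code by looking for the HEALTH_OK string shown above:

```shell
# Exits 0 only when the given status text reports HEALTH_OK.
cluster_ok() {
    printf '%s\n' "$1" | grep -q 'HEALTH_OK'
}

# On a live cluster:
#   cluster_ok "$(ceph -s)" && echo "cluster healthy"
```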