Cluster Installation

EXT4 is used here as the cluster filesystem, and Ceph's cephx authentication is turned off to simplify testing.

Install the dependency packages:

yum -y install gcc gcc-c++ make automake libtool expat expat-devel \
boost-devel nss-devel cryptopp cryptopp-devel libatomic_ops-devel \
fuse-devel gperftools-libs gperftools-devel libaio libaio-devel libedit libedit-devel libuuid-devel

Installing from source:

Download the source: http://ceph.com/download/ceph-0.48argonaut.tar.gz

Extract the package and build:

tar -zxvf ceph-0.48argonaut.tar.gz

cd ceph-0.48argonaut

./autogen.sh

CXXFLAGS="-g -O2" ./configure --prefix=/usr --sbindir=/sbin --localstatedir=/var --sysconfdir=/etc --without-tcmalloc

make && make install
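A quick way to confirm that the build and install worked (assuming the configure prefix above, so the binaries end up under /usr):

which ceph

ceph -v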

RPM installation method:

#wget http://ceph.com/download/ceph-0.48argonaut.tar.bz2

#tar xjvf ceph-0.48argonaut.tar.bz2

#cp ceph-0.48argonaut.tar.bz2 ~/rpmbuild/SOURCES

#rpmbuild -ba ceph-0.48argonaut/ceph.spec

#cd  /root/rpmbuild/RPMS/x86_64/

#rpm  -Uvh 

Edit the configuration file ceph.conf:

[global]

        ; enable secure authentication

        ;auth supported = cephx

 

        ; allow ourselves to open a lot of files

        max open files = 131072

 

        ; set log file

        log file = /var/log/ceph/$name.log

        ; log_to_syslog = true        ; uncomment this line to log to syslog

 

        ; set up pid files

        pid file = /var/run/ceph/$name.pid

 

        ; If you want to run an IPv6 cluster, set this to true. Dual-stack isn't possible

        ;ms bind ipv6 = true

 

[mon]

        mon data = /ceph/mon_data/$name

        debug ms = 1

        debug mon = 20

        debug paxos = 20

        debug auth = 20

 

[mon.0]

        host = node89

        mon addr = 1.1.1.89:6789

 

[mon.1]

        host = node97

        mon addr = 1.1.1.97:6789

 

[mon.2]

        host = node56

        mon addr = 1.1.1.56:6789

 

; mds

;  You need at least one.  Define two to get a standby.

[mds]

        ; where the mds keeps its secret encryption keys

        keyring = /ceph/mds_data/keyring.$name

 

        ; mds logging to debug issues.

        debug ms = 1

        debug mds = 20

 

[mds.0]

        host = node89

 

[mds.1]

        host = node97

 

[mds.2]

        host = node56

 

[osd]

        osd data = /ceph/osd_data/$name

        filestore xattr use omap = true

        osd journal = /ceph/osd_data/$name/journal

        osd journal size = 1000 ; journal size, in megabytes

        journal dio = false

        debug ms = 1

        debug osd = 20

        debug filestore = 20

        debug journal = 20

        filestore fiemap = false

        osd class dir = /usr/lib/rados-classes

        keyring = /etc/ceph/keyring.$name

 

[osd.0]

        host = node89   

        devs = /dev/mapper/vg_control-lv_home

 

[osd.1]

        host = node97

        devs = /dev/mapper/vg_node2-lv_home

 

[osd.2]

        host = node56

        devs = /dev/mapper/vg_node56-lv_home
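Every node's daemons read /etc/ceph/ceph.conf locally, so the file shown above must be identical on node89, node97 and node56. The install script below copies it from its working directory, but it can also be pushed by hand, for example:

#scp /etc/ceph/ceph.conf node97:/etc/ceph/

#scp /etc/ceph/ceph.conf node56:/etc/ceph/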

Installation script:

#!/bin/bash

yum -y install gcc gcc-c++ make automake libtool expat expat-devel \
boost-devel nss-devel cryptopp cryptopp-devel libatomic_ops-devel \
fuse-devel gperftools-libs gperftools-devel libaio libaio-devel libedit libedit-devel libuuid-devel

which ceph > /dev/null 2>&1

if [ $? -eq 1 ]; then

    tar -zxvf ceph-0.48argonaut.tar.gz

    cd ceph-0.48argonaut

    ./autogen.sh

    CXXFLAGS="-g -O2" ./configure --prefix=/usr --sbindir=/sbin --localstatedir=/var --sysconfdir=/etc --without-tcmalloc

    make && make install

    cd ..

    rm -rf ceph-0.48argonaut

fi

echo "#####################Configure#########################"

rm -rf /ceph/*

rm -rf /etc/ceph/*

mkdir -p /ceph/mon_data/{mon.0,mon.1,mon.2}

mkdir -p /ceph/osd_data/{osd.0,osd.1,osd.2}

mkdir -p /ceph/mds_data

touch /etc/ceph/keyring

touch /etc/ceph/ceph.keyring

touch /etc/ceph/keyring.bin

cp ceph.conf /etc/ceph/

echo "#####################Iptables##########################"

grep 6789 /etc/sysconfig/iptables

if [ $? -eq 1 ];then

  iptables -A INPUT -m multiport -p tcp --dports 6789,6800:6810 -j ACCEPT

  service iptables save

  service iptables restart

fi

echo "######################Init service#####################"

#mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring.admin

#service ceph restart

echo "Install Ceph Successful!"

Initialize the cluster on the monitor node (mkcephfs logs into each node and prompts for its password several times; this can be avoided by setting up SSH key authentication, as sketched below):

mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring.admin

After this command runs, the relevant files are generated under the osd and mon data directories; if they are missing, initialization failed and starting the ceph service will report errors. Note that with btrfs the osd directories do not need to be mounted manually, but with ext4 they must be mounted by hand.
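Since mkcephfs -a reaches the other nodes over SSH, generating a key on the monitor node and copying it to node97 and node56 beforehand removes the password prompts (a minimal sketch using the default key location):

#ssh-keygen -t rsa

#ssh-copy-id root@node97

#ssh-copy-id root@node56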

Start the service on all nodes:

service ceph restart

Check the cluster status:

#ceph health detail

#ceph -s

#ceph osd tree

#ceph osd dump
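A healthy cluster normally reports HEALTH_OK from the first two commands, and the osd tree should show all three osds up. To keep watching cluster events while testing, ceph -w can also be left running:

#ceph -w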

Cluster Operations

Here a new logical volume is created in the existing volume group for these experiments:

#vgs

  VG         #PV #LV #SN Attr   VSize   VFree

  vg_control   1   4   0 wz--n- 931.02g 424.83g

#lvcreate --size 10g --name ceph_test  vg_control

#mkfs.ext4 /dev/mapper/vg_control-ceph_test

#lvs

1. Expanding the cluster with a new osd:

ceph osd create

Add the following to the ceph.conf configuration file:

[osd.3]

    host = newnode

    devs = /dev/mapper/vg_control-ceph_test

Format and mount the osd:

#mkfs.ext4 /dev/mapper/vg_control-ceph_test

#mkdir  /ceph/osd_data/osd.3

#mount  -o  user_xattr  /dev/mapper/vg_control-ceph_test  /ceph/osd_data/osd.3

At this point the new osd has been added, but before it can actually be used it must be added to the CRUSH map (the mapping that controls how data is placed onto osds):

#ceph osd crush set 3 osd.3 1.0 pool=default host=newnode

#ceph-osd -c /etc/ceph/ceph.conf --monmap /tmp/monmap -i 3 --mkfs
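The --mkfs step reads the monitor map from /tmp/monmap, so if that file does not exist yet it has to be fetched from the monitors first (the same command is used again in step 3):

#ceph mon getmap -o /tmp/monmap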

2. Removing the osd.3 node:

#ceph osd crush remove osd.3

#ceph osd tree

dumped osdmap tree epoch 21

# id    weight  type name       up/down reweight

-1      3       pool default

-3      3               rack unknownrack

-2      1                       host node89

0       1                               osd.0   up      1

-4      1                       host node97

1       1                               osd.1   up      1

-5      1                       host node56

2       1                               osd.2   up      1

 

3       0       osd.3   down    0

#ceph osd rm 3

#ceph osd tree

dumped osdmap tree epoch 22

# id    weight  type name       up/down reweight

-1      3       pool default

-3      3               rack unknownrack

-2      1                       host node89

0       1                               osd.0   up      1

-4      1                       host node97

1       1                               osd.1   up      1

-5      1                       host node56

2       1                               osd.2   up      1

#rm -r /ceph/osd_data/osd.3/

  

Modify ceph.conf and delete the entries related to osd.3.
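On a cluster that already holds data it is usually safer to mark the osd out first, wait for the data to migrate away, and stop its daemon before running the crush remove step above; a rough sketch:

#ceph osd out 3

#service ceph stop osd.3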

3. Mounting a new disk for osd.0

#service ceph stop osd

#umount /ceph/osd_data/osd.0

# mkfs.ext4 /dev/mapper/vg_control-ceph_test

# tune2fs -o journal_data_writeback /dev/mapper/vg_control-ceph_test

# mount -o rw,noexec,nodev,noatime,nodiratime,user_xattr,data=writeback,barrier=0 /dev/mapper/vg_control-ceph_test /ceph/osd_data/osd.0

Add the following line to fstab:

/dev/mapper/vg_control-ceph_test /ceph/osd_data/osd.0 ext4 rw,noexec,nodev,noatime,nodiratime,user_xattr,data=writeback,barrier=0 0 0

Re-register the osd with the monitors:

#mount -a

#ceph mon getmap -o /tmp/monmap

#ceph-osd -c /etc/ceph/ceph.conf --monmap /tmp/monmap -i 0 --mkfs

#service ceph start osd
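Once the daemon is back up, the osd should rejoin on its own; a quick sanity check:

#ceph osd stat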

4. Client mount

On a client machine with the ceph client installed, mount the ceph cluster:

#mount -t ceph 1.1.1.89:6789:/ /mnt

#df -h

1.1.1.89:6789:/   100G  3.1G   97G   4% /mnt
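The kernel mount above needs the ceph kernel client; if it is not available, ceph-fuse (built here since fuse-devel is among the dependencies) is a rough userspace alternative:

#ceph-fuse -m 1.1.1.89:6789 /mnt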