Environment
Operating environment
OpenStack Icehouse
Ceph 0.87
fio 2.0.13
OpenStack & Ceph configuration
Add the following option to Nova's configuration in OpenStack:
disk_cachemodes="network=writeback"
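On Icehouse this option lives in the [libvirt] section of nova.conf on each compute node; a minimal sketch (restart nova-compute afterwards, and note that only newly launched or hard-rebooted instances pick up the new cache mode):

[libvirt]
# qemu cache mode per disk type; "network" matches rbd-backed disks
disk_cachemodes="network=writeback"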
Enable the RBD cache in Ceph (ceph.conf):
[client]
rbd cache = true
rbd cache size = 33554432              # 32 MB, in bytes
rbd cache max dirty = 25165824         # 24 MB
rbd cache target dirty = 16777216      # 16 MB
rbd cache max dirty age = 1            # seconds
rbd cache writethrough until flush = true
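To confirm these client-side settings actually reach qemu, one approach is to expose an admin socket for clients and query it; the socket path below uses Ceph's standard metavariables, and the exact .asok file name will vary per process:

[client]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

# from the compute node, with a guest running:
ceph --admin-daemon /var/run/ceph/<client>.asok config show | grep rbd_cache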
fio test script
The script below uses fio to measure sequential read, sequential write, random read, and random write performance; each workload appends its output to its own result file.
#!/bin/bash
# Run fio against the RBD-backed volume attached to the guest as /dev/vdb.

function tgt_r {    # sequential read
    fio -filename=/dev/vdb -direct=1 -iodepth 1 -thread -rw=read -ioengine=libaio -bs="$1" -size=10G -numjobs=30 -runtime=300 -group_reporting -name=mytest &>> s_r_test
}

function tgt_w {    # sequential write
    fio -filename=/dev/vdb -direct=1 -iodepth 1 -thread -rw=write -ioengine=libaio -bs="$1" -size=10G -numjobs=30 -runtime=300 -group_reporting -name=mytest &>> s_w_test
}

function tgt_rr {   # random read
    fio -filename=/dev/vdb -direct=1 -iodepth 1 -thread -rw=randread -ioengine=libaio -bs="$1" -size=10G -numjobs=30 -runtime=300 -group_reporting -name=mytest &>> r_r_test
}

function tgt_rw {   # random write
    fio -filename=/dev/vdb -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=libaio -bs="$1" -size=10G -numjobs=30 -runtime=300 -group_reporting -name=mytest &>> r_w_test
}

# repeat every workload for 4 KB, 64 KB and 1 MB block sizes
for i in 4k 64k 1m
do
    tgt_r "$i"
    tgt_w "$i"
    tgt_rr "$i"
    tgt_rw "$i"
done
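A quick way to pull the aggregate numbers out of the result files afterwards (file names follow the script above):

# print per-job bandwidth/IOPS and the group-reporting aggregates
for f in s_r_test s_w_test r_r_test r_w_test
do
    echo "== $f =="
    grep -E 'bw=|iops=|aggrb=' "$f"
done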
Test procedure
1. Run the fio script above with the OSDs' XFS filesystems mounted with the default options:
/dev/sda /osd1 xfs defaults 0 0
/dev/sdb /osd2 xfs defaults 0 0
/dev/sdc /osd3 xfs defaults 0 0
2. Run the fio script again after tuning the XFS mount options on the OSDs (a sketch for applying the new options without a reboot follows the entries below):
/dev/sda /osd1 xfs rw,noexec,nodev,noatime,nodiratime,barrier=0 0 0
/dev/sdb /osd2 xfs rw,noexec,nodev,noatime,nodiratime,barrier=0 0 0
/dev/sdc /osd3 xfs rw,noexec,nodev,noatime,nodiratime,barrier=0 0 0
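The tuned options can be applied without a reboot by stopping each OSD, remounting, and starting it again; a sketch for the first disk, assuming sysvinit on Ceph 0.87 and that osd.0 lives on /osd1 (adjust the id to your own layout):

service ceph stop osd.0
umount /osd1
mount -o rw,noexec,nodev,noatime,nodiratime,barrier=0 /dev/sda /osd1
service ceph start osd.0
grep osd1 /proc/mounts    # verify the effective mount options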
Test results
Throughput (MB/s)