Basic steps for installing TFS on CentOS (backup notes)

This document details the steps to install TFS (Taobao File System) on CentOS, including installing the dependency packages and tb-common-utils, resolving compile errors, and configuring and starting the TFS services. The key steps are setting the TBLIB_ROOT environment variable, using an older gcc to avoid compile problems, and configuring and starting the nameserver and DataServer.


rpm -i libstdc++-devel-4.4.7-11.el6.x86_64.rpm
rpm -i libstdc++-4.4.7-11.el6.x86_64.rpm
rpm -i gcc-c++-4.4.7-11.el6.x86_64.rpm
rpm -i gcc-4.4.7-11.el6.x86_64.rpm
rpm -i autoconf-2.63-5.1.el6.noarch.rpm
rpm -i automake-1.11.1-4.el6.noarch.rpm
rpm -i libtool-2.2.6-15.5.el6.x86_64.rpm
rpm -i ncurses-devel-5.7-3.20090208.el6.x86_64.rpm
rpm -i readline-6.0-4.el6.x86_64.rpm
rpm -i readline-devel-6.0-4.el6.x86_64.rpm
rpm -i zlib-1.2.3-29.el6.x86_64.rpm
rpm -i zlib-devel-1.2.3-29.el6.x86_64.rpm
rpm -i e2fsprogs-1.41.12-21.el6.x86_64.rpm
rpm -i uuid-1.6.1-10.el6.x86_64.rpm
rpm -i uuid-devel-1.6.1-10.el6.x86_64.rpm

rpm -i libuuid-devel-2.17.2-12.18.el6.x86_64.rpm


TFS depends on the tb-common-utils package, which contains two components used at Taobao: the base system library tbsys and the network library tbnet. Before installing tb-common-utils, set the TBLIB_ROOT environment variable; tbsys and tbnet will be installed under the path TBLIB_ROOT points to (it must be an absolute path), and TFS looks for the tbsys and tbnet headers and libraries under that path.

Set the TBLIB_ROOT environment variable:
export TBLIB_ROOT=/usr/local/tb/lib

export LD_LIBRARY_PATH=/usr/local/gcc-4.1.2/lib

export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:/home/tb-common-utils_18/tbsys/src:/home/tb-common-utils_18/tbnet/src
export echo=echo
Add export TBLIB_ROOT=/usr/local/tb to ~/.bash_profile, then run source ~/.bash_profile.
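The TBLIB_ROOT value is easy to get wrong (a relative path, or a forgotten source). A minimal sketch, assuming nothing beyond standard bash, that validates the variable before building:

```shell
#!/usr/bin/env bash
# Sanity-check TBLIB_ROOT before building tb-common-utils:
# it must be set, and it must be an absolute path.
check_tblib_root() {
    if [ -z "$TBLIB_ROOT" ]; then
        echo "TBLIB_ROOT is not set" >&2
        return 1
    fi
    case "$TBLIB_ROOT" in
        /*) echo "TBLIB_ROOT=$TBLIB_ROOT" ;;
        *)  echo "TBLIB_ROOT must be an absolute path" >&2; return 1 ;;
    esac
}

export TBLIB_ROOT=/usr/local/tb/lib
check_tblib_root    # prints TBLIB_ROOT=/usr/local/tb/lib
```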
# cd tb-common-utils

#cd tbnet

#./configure

#cd tbsys

#./configure

# sh build.sh

# cd tfs
# sh build.sh init
# ./configure --prefix=/usr/local/tfs --with-release
# make
# make install

Fixing the build errors: this system has the newer gcc 4.4.7, while TFS normally expects the older gcc 4.1.2, but the errors can also be worked around under the newer compiler.
Step 1, strip -Werror from every Makefile:
# find ./ -name Makefile | xargs sed -i 's/-Werror//'
Step 2:
# vi ./src/common/session_util.h
Add the following as the first line:
 #include <stdint.h>
Step 3:
# vi ./src/name_meta_server/meta_server_service.cpp
Change line 1584 to:
char* pos = (char *) strstr(sub_dir, parents_dir);
Then run make and make install again.
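The find | xargs sed one-liner above edits every Makefile in place. A self-contained sketch of its effect on a scratch Makefile (GNU sed's -i, as shipped on CentOS):

```shell
#!/usr/bin/env bash
# Demonstrate the -Werror removal on a throwaway Makefile.
workdir=$(mktemp -d)
printf 'CXXFLAGS = -g -O2 -Wall -Werror\n' > "$workdir/Makefile"

# Same command as in the text, scoped to the scratch directory.
find "$workdir" -name Makefile | xargs sed -i 's/-Werror//'

grep CXXFLAGS "$workdir/Makefile"   # -Werror is gone, the other flags remain
rm -rf "$workdir"
```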


Part 1. Install the dependency packages:
1. automake. TFS is built with the automake tool:

yum install automake.noarch

2. libtool. automake requires libtool:

yum install libtool

3. readline. A library for command-line editing:

yum install readline-devel

4. zlib-devel. Used for data compression and decompression:

yum install zlib-devel

5. libuuid-devel. Used to generate globally unique IDs:

yum install e2fsprogs-devel
yum install libuuid-devel

6. tcmalloc. Google's memory management library (optional; Google is blocked here, so skip it for now).

Part 2. Install tb-common-utils
TFS depends on the tb-common-utils package, which contains two components used at Taobao: the base system library tbsys and the network library tbnet. Before installing tb-common-utils, set the TBLIB_ROOT environment variable; tbsys and tbnet will be installed under the path TBLIB_ROOT points to (it must be an absolute path), and TFS looks for the tbsys and tbnet headers and libraries under that path.
Set the TBLIB_ROOT environment variable:
1. Add export TBLIB_ROOT=path_to_tbutil to ~/.bash_profile, then run source ~/.bash_profile:

vi ~/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
	. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH
export TBLIB_ROOT=/usr/local/tb-common-utils

# source ~/.bash_profile

2. Download the source:

# svn co -r 18 http://code.taobao.org/svn/tb-common-utils/trunk tb-common-utils

Note: do not check out the latest revision here; changes made after r18 broke backward compatibility of some interfaces.

# cd /usr/local/tb-common-utils/
# sh build.sh

checking for C++ compiler default output file name... 
configure: error: in `/usr/local/tb-common-utils/tbnet':
configure: error: C++ compiler cannot create executables
See `config.log' for more details.
make: *** No targets specified and no makefile found.  Stop.
make: *** No rule to make target `install'.  Stop.

This error occurs because gcc-c++ is missing:

yum install gcc-c++

If all went well, tb-common-utils is now installed under $TBLIB_ROOT.
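Once build.sh finishes, it is worth confirming that the files actually landed where TFS will search. A sketch assuming the conventional include/ and lib/ layout under $TBLIB_ROOT (the directory names are an assumption; adjust to your build):

```shell
#!/usr/bin/env bash
# Check that TBLIB_ROOT contains the expected include/ and lib/ directories.
check_tb_install() {
    local root=${1:?usage: check_tb_install <TBLIB_ROOT>}
    local d missing=0
    for d in include lib; do
        if [ ! -d "$root/$d" ]; then
            echo "missing directory: $root/$d" >&2
            missing=1
        fi
    done
    return "$missing"
}

# Demonstration against a scratch tree; in real use pass "$TBLIB_ROOT".
demo=$(mktemp -d); mkdir -p "$demo/include" "$demo/lib"
check_tb_install "$demo" && echo "tb-common-utils looks installed"
rm -rf "$demo"
```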

Part 3. Install TFS
Most TFS open-source users only use the basic functionality, so by default this version builds only the TFS nameserver, dataserver, client and tools, removing the dependency on MySQL. Users who need rcserver (the global resource management service) or metaserver (the TFS custom-filename service) should build and install those two services themselves.
Download the source and install:

# svn co http://code.taobao.org/svn/tfs/branches/dev_for_outer_users tfs
# cd tfs/
# sh build.sh init
+ aclocal
+ libtoolize --force --copy
libtoolize: putting auxiliary files in `.'.
libtoolize: copying file `./ltmain.sh'
libtoolize: Consider adding `AC_CONFIG_MACRO_DIR([m4])' to configure.ac and
libtoolize: rerunning libtoolize, to keep the correct libtool macros in-tree.
libtoolize: Consider adding `-I m4' to ACLOCAL_AMFLAGS in Makefile.am.
libtoolize: `AC_PROG_RANLIB' is rendered obsolete by `LT_INIT'
+ autoconf --force
+ automake --foreign --copy --add-missing
configure.ac:16: installing `./config.guess'
configure.ac:16: installing `./config.sub'
configure.ac:8: installing `./install-sh'
configure.ac:8: installing `./missing'
src/adminserver/Makefile.am: installing `./depcomp'
# ./configure --prefix=/usr/local/taobaoFS --with-release

checking for tc_cfree in -ltcmalloc... no
configure: error: in `/usr/local/tfs/tfs':
configure: error: tcmalloc link failed (--without-tcmalloc to disable)
See `config.log' for more details.

Note that configure fails here because we did not install tcmalloc earlier, so the build options need to be adjusted:

# ./configure --prefix=/usr/local/taobaoFS --with-release --without-tcmalloc
# make
# make install

--prefix specifies the TFS install path; by default TFS is installed to the ~/tfs_bin directory.
--with-release builds with the release-version parameters. Without this flag the stricter development parameters are used, including -Werror, which turns every warning into an error; on newer gcc versions that makes the build fail. Many of the compile problems reported by open-source users are related to this, since newer gcc versions check code ever more strictly, while the gcc version used inside Taobao is gcc 4.1.2.
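The tcmalloc failure above suggests wrapping configure in a retry: attempt the preferred flags first, then fall back to --without-tcmalloc. A generic sketch; the stand-in command true models a configure run, and this helper is not something TFS provides:

```shell
#!/usr/bin/env bash
# Run a command; if it fails, retry once with an extra fallback flag.
# Models: ./configure --prefix=... --with-release, retried with
# --without-tcmalloc when the tcmalloc link test fails.
run_with_fallback() {
    local fallback_flag=$1; shift
    if "$@"; then
        echo "configured with defaults"
    else
        echo "retrying with $fallback_flag" >&2
        "$@" "$fallback_flag" && echo "configured with $fallback_flag"
    fi
}

# 'true' stands in for a ./configure invocation that succeeds.
run_with_fallback --without-tcmalloc true   # prints "configured with defaults"
```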

Part 4. Configuration files:
ns.conf:

# cd /usr/local/taobaoFS/conf/
# cat ns.conf | grep -v ^# | grep -v ^$
[public]
log_size=1073741824
log_num = 8 
log_level=info
task_max_queue_size = 10240
port = 8108
work_dir=/usr/local/taobaoFS
dev_name= eth0
thread_count =32 
ip_addr = 192.168.1.63
[nameserver]
safe_mode_time = 360
ip_addr_list = 192.168.1.63|192.168.0.2
group_mask = 255.255.255.255
max_write_timeout = 3
cluster_id = 1 
block_max_use_ratio = 98
block_max_size = 75497472 
max_replication = 2
min_replication = 2
replicate_ratio = 50
max_write_filecount = 64 
use_capacity_ratio = 96
heart_interval = 4
object_dead_max_time = 300
object_clear_max_time = 180 
heart_thread_count = 4 
heart_max_queue_size = 2048
report_block_thread_count = 6
report_block_max_queue_size = 32
report_block_hour_range = 2~4
report_block_time_interval = 1
repl_wait_time = 180
compact_delete_ratio =  10  
compact_max_load = 200
compact_hour_range = 1~10
dump_stat_info_interval = 60000000 
balance_percent = 0.05
add_primary_block_count = 3
task_percent_sec_size = 200 
oplog_sync_max_slots_num = 1024
oplog_sync_thread_num = 1
group_count = 1
group_seq  = 0
discard_newblk_safe_mode_time = 360 
choose_target_server_random_max_num = 128

ds.conf:

# cat ds.conf | grep -v ^# | grep -v ^$
[public]
log_size=1073741824
log_num = 8
log_level=info
task_max_queue_size = 10240
port = 8200 
work_dir=/usr/local/taobaoFS
dev_name= eth0
thread_count = 32 
ip_addr = 192.168.1.64
[dataserver]
ip_addr = 192.168.1.63
ip_addr_list = 192.168.1.63|192.168.0.2 
port = 8108
heart_interval = 2
check_interval = 2
replicate_threadcount = 2
block_max_size = 75497472
dump_visit_stat_interval = 60
backup_type = 1
mount_name = /data/disk
mount_maxsize = 768959044
base_filesystem_type = 1
superblock_reserve = 0
avg_file_size = 15360
mainblock_size = 75497472 
extblock_size = 4194304
block_ratio = 0.5
hash_slot_ratio = 0.5

The ds.conf on the third server is as follows:

# cat ds.conf | grep -v ^# | grep -v ^$
[public]
log_size=1073741824
log_num = 8
log_level=info
task_max_queue_size = 10240
port = 8200 
work_dir=/usr/local/taobaoFS
dev_name= eth0
thread_count = 32 
ip_addr = 192.168.1.66
[dataserver]
ip_addr = 192.168.1.63
ip_addr_list = 192.168.1.63|192.168.0.2 
port = 8108
heart_interval = 2
check_interval = 2
replicate_threadcount = 2
block_max_size = 75497472 
dump_visit_stat_interval = 60
backup_type = 1
mount_name = /data/disk
mount_maxsize = 768959044
base_filesystem_type = 1
superblock_reserve = 0
avg_file_size = 15360
mainblock_size = 75497472 
extblock_size = 4194304
block_ratio = 0.5
hash_slot_ratio = 0.5
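The two ds.conf files above differ only in the [public] ip_addr line. That is easy to confirm mechanically: write the configs, strip comments and blank lines with the same grep filter used above, and diff. A sketch with abbreviated stand-in configs in a temp directory:

```shell
#!/usr/bin/env bash
# Show that two DataServer configs differ only in their own ip_addr.
workdir=$(mktemp -d)

for ip in 192.168.1.64 192.168.1.66; do
    cat > "$workdir/ds-$ip.conf" <<EOF
# DataServer config (stand-in, abbreviated)
[public]
port = 8200
ip_addr = $ip

[dataserver]
ip_addr = 192.168.1.63
port = 8108
EOF
done

# Filter out comments/blank lines, as in the article, then diff.
filt() { grep -v '^#' "$1" | grep -v '^$'; }
diff <(filt "$workdir/ds-192.168.1.64.conf") \
     <(filt "$workdir/ds-192.168.1.66.conf") || true
rm -rf "$workdir"
```

Only the [public] ip_addr line shows up in the diff output, which is a handy check before pushing a copied config to a new DataServer.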

Part 5. Startup
1. Run TFS
Start the nameserver
Run the tfs script in the scripts directory:

# ./tfs start_ns
 nameserver is up SUCCESSFULLY pid: 24744

2. Start the DataServers
The current TFS can run multiple DataServer processes on one server; typically each DataServer process manages one disk.
Format the data disks as EXT4 file systems and mount them at /data/tfs1 through /data/tfs(i), where i is the disk number.
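The /data/tfs1 through /data/tfs(i) layout can be generated rather than typed by hand. A sketch that emits the mount points (the device name in the comment is a placeholder, not from the source):

```shell
#!/usr/bin/env bash
# Print one mount point per disk: /data/tfs1 .. /data/tfsN.
tfs_mount_points() {
    local n=$1 i
    for ((i = 1; i <= n; i++)); do
        echo "/data/tfs$i"
    done
}

tfs_mount_points 3
# Each mount point would then be formatted and mounted, e.g. (placeholder device):
#   mkfs.ext4 /dev/sdb1 && mount /dev/sdb1 /data/tfs1
```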
Startup steps:
A. Pre-allocate the storage area. Run stfs format n in the scripts directory (n is the mount-point index; see the stfs usage message for details). For example, stfs format 2,4-6 pre-allocates /data/tfs2, /data/tfs4, /data/tfs5 and /data/tfs6; when it finishes, the main blocks, extension blocks and the corresponding statistics have been pre-created under those mount points.
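stfs (and tfs start_ds below) accept a disk list such as 2,4-6. A sketch of how that comma-and-dash spec expands into individual indices, assuming only the syntax shown in the text:

```shell
#!/usr/bin/env bash
# Expand a disk spec like "2,4-6" into individual indices.
expand_spec() {
    local spec=$1 part lo hi i
    local -a parts
    IFS=',' read -ra parts <<< "$spec"
    for part in "${parts[@]}"; do
        if [[ $part == *-* ]]; then
            lo=${part%-*}; hi=${part#*-}
            for ((i = lo; i <= hi; i++)); do echo "$i"; done
        else
            echo "$part"
        fi
    done
}

expand_spec "2,4-6"   # prints 2, 4, 5 and 6, one per line
```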

# ./stfs clear 1
 clear ds 1 SUCCESSFULLY 
 [2014-08-07 10:59:11] INFO  blockfile_manager.cpp:111 [140398366021408] clear block file system end. mount_point: /data/disk1, ret: 1
# ./stfs format 1
 format ds 1 SUCCESSFULLY 
 mount name: /data/disk1 max mount size: 768959044 base fs type: 1 superblock reserve offset: 0 main block size: 75497472 extend block size: 4194304 block ratio: 0.5 file system version: 1 avg inner file size: 15360 hash slot ratio: 0.5
[2014-08-07 12:16:15] INFO  blockfile_manager.cpp:1091 [140579583227680] super block mount point: /data/disk1.
[2014-08-07 12:16:15] INFO  blockfile_manager.cpp:1171 [140579583227680] cal block count. avail data space: 783334178816, main block count: 9338, ext block count: 18676
tag TAOBAO
mount time 1407384975
mount desc /data/disk1
max use space 787414061056
base filesystem 1
superblock reserve 0
bitmap start offset 324
avg inner file size 15360
block type ratio 0.5
main block count 9338
main block size 75497472
extend block count 18676
extend block size 4194304
used main block count 0
used extend block count 0
hash slot ratio 0.5
hash slot size 2730
first mmap size 122880
mmap size step 4096
max mmap size 3686400
version 1
[2014-08-07 12:16:15] INFO  blockfile_manager.cpp:1213 [140579583227680] cal bitmap count. item count: 28015, slot count: 3502

B. Run the data servers. There are two ways:
Start the dataservers through adminserver (recommended): run ./tfs admin_ds in the scripts directory.
Start the dataservers directly: running ./tfs start_ds 2,4-6 in the scripts directory starts dataserver2, dataserver4, dataserver5 and dataserver6.

# ./tfs start_ds 1
 dataserver 1 is up SUCCESSFULLY pid: 2212
# ./tfs start_ds 2
 dataserver 2 is up SUCCESSFULLY pid: 28715 

# netstat -lantp | grep dataserver
tcp        0      0 0.0.0.0:8200                0.0.0.0:*                   LISTEN      2960/dataserver     
tcp        0      0 0.0.0.0:8201                0.0.0.0:*                   LISTEN      2960/dataserver     
tcp        0      0 192.168.1.64:36866          192.168.1.63:8108           ESTABLISHED 2960/dataserver     
tcp        0      1 192.168.1.64:38846          192.168.0.2:8108            SYN_SENT    2960/dataserver  

# netstat -lantp | grep dataserver
tcp        0      0 0.0.0.0:8203                0.0.0.0:*                   LISTEN      29636/dataserver    
tcp        0      0 0.0.0.0:8202                0.0.0.0:*                   LISTEN      29636/dataserver    
tcp        0      0 192.168.1.66:56412          192.168.1.63:8108           ESTABLISHED 29636/dataserver    
tcp        0      1 192.168.1.66:49068          192.168.0.2:8108            SYN_SENT    29636/dataserver

At this point the services are basically up and running.

Part 6. Related operations

# /usr/local/taobaoFS/bin/ssm -s 192.168.1.63:8108
show > server -b
SERVER_ADDR           CNT BLOCK 
192.168.1.64:8200    194   2846  2847  2848  2849  2850  2851  2852  2853  2854  2855
  2858  2859  2860  2861  2862  2863  2864  2865
  2868  2869  2870  2871  2872  2873  2874  2875
  2878  2879  2880  2881  2882  2883  2884  2885
  2888  2889  2890  2891  2892  2893  2894  2895
  2898  2899  2900  2901  2902  2903  2904  2905
  2908  2909  2910  2911  2912  2913  2914  2915
  2918  2919  2920  2921  2922  2923  2924  2925
  2928  2929  2930  2931  2932  2933  2934  2935
  2938  2939  2940  2941  2942  2943  2944  2945
  2948  2949  2950  2951  2952  2953  2954  2955
  2958  2959  2960  2961  2962  2963  2964  2965
  2968  2969  2970  2971  2972  2973  2974  2975
  2978  2979  2980  2981  2982  2983  2984  2985
  2988  2989  2990  2991  2992  2993  2994  2995
  2998  2999  3000  3001  3002  3003  3004  3005
  3008  3009  3010  3011  3012  3013  3014  3015
  3018  3019  3020  3021  3022  3023  3024  3025
  3028  3029  3030  3031  3032  3033  3034  3035
  3038  3139
192.168.1.66:8202    194   2846  2847  2848  2849  2850  2851  2852  2853  2854  2855
  2858  2859  2860  2861  2862  2863  2864  2865
  2868  2869  2870  2871  2872  2873  2874  2875
  2878  2879  2880  2881  2882  2883  2884  2885
  2888  2889  2890  2891  2892  2893  2894  2895
  2898  2899  2900  2901  2902  2903  2904  2905
  2908  2909  2910  2911  2912  2913  2914  2915
  2918  2919  2920  2921  2922  2923  2924  2925
  2928  2929  2930  2931  2932  2933  2934  2935
  2938  2939  2940  2941  2942  2943  2944  2945
  2948  2949  2950  2951  2952  2953  2954  2955
  2958  2959  2960  2961  2962  2963  2964  2965
  2968  2969  2970  2971  2972  2973  2974  2975
  2978  2979  2980  2981  2982  2983  2984  2985
  2988  2989  2990  2991  2992  2993  2994  2995
  2998  2999  3000  3001  3002  3003  3004  3005
  3008  3009  3010  3011  3012  3013  3014  3015
  3018  3019  3020  3021  3022  3023  3024  3025
  3028  3029  3030  3031  3032  3033  3034  3035
  3038  3139
show > server -w
SERVER_ADDR           CNT WRITABLE BLOCK
192.168.1.64:8200     90   2849  2850  2852  2856  2857  2859  2860  2862  2865  2866
  2879  2884  2886  2887  2891  2895  2899  2900
  2906  2907  2911  2912  2913  2914  2915  2917
  2925  2927  2929  2930  2932  2934  2935  2936
  2942  2944  2945  2946  2948  2950  2951  2955
  2959  2960  2961  2964  2967  2968  2978  2979
  2984  2985  2987  2988  2990  2993  2994  3000
  3006  3007  3010  3012  3013  3014  3015  3016
  3022  3025  3027  3029  3030  3035  3038  3139
192.168.1.66:8202    103   2846  2847  2848  2851  2853  2854  2855  2858  2861  2863
  2868  2869  2870  2872  2874  2875  2876  2877
  2882  2883  2885  2888  2889  2890  2892  2893
  2897  2898  2902  2903  2904  2908  2909  2910
  2919  2922  2923  2924  2926  2928  2931  2933
  2941  2943  2947  2949  2952  2953  2954  2956
  2965  2966  2969  2970  2971  2972  2973  2974
  2977  2981  2983  2986  2989  2991  2992  2995
  2998  2999  3001  3002  3003  3008  3009  3011
  3021  3023  3024  3026  3028  3031  3032  3033
  3037
show > server -m
SERVER_ADDR           CNT MASTER BLOCK
192.168.1.64:8200     64   2849  2850  2852  2856  2857  2859  2860  2862  2865  2866
  2879  2884  2886  2887  2891  2895  2899  2900
  2906  2907  2911  2912  2913  2914  2915  2917
  2925  2927  2929  2930  2932  2934  2935  2936
  2993  2994  3000  3004  3005  3006  3007  3010
  3014  3015  3016  3018  3020  3022  3025  3027
  3035  3038
192.168.1.66:8202     64   2846  2847  2848  2851  2853  2854  2855  2858  2861  2863
  2868  2869  2870  2872  2874  2875  2876  2877
  2882  2883  2885  2969  2970  2971  2972  2973
  2976  2977  2981  2983  2986  2989  2991  2992
  2997  2998  2999  3001  3002  3003  3008  3009
  3019  3021  3023  3024  3026  3028  3031  3032
  3036  3037
show > machine -a
  SERVER_IP     NUMS UCAP  / TCAP =  UR  BLKCNT  LOAD TOTAL_WRITE  TOTAL_READ  LAST_WRITE  LAST_READ  MAX_WRITE   MAX_READ
--------------- ---- ------------------ -------- ---- -----------  ----------  ----------  ---------  --------  ---------
192.168.1.64     1 15.16G 729.53G   2%     194  10   4.1K     0     0     0     0     0    0     0   0     0   0     0
192.168.1.66     1 15.16G 729.53G   2%     194  10   4.1K     0     3     0     0     0    0     0   0     0   0     0
Total : 2          2 30.31G   1.42T   2%     388  10   8.3K     0     3     0     0     0    0     0

show > machine -p
  SERVER_IP     NUMS UCAP  / TCAP =  UR  BLKCNT  LOAD LAST_WRITE  LAST_READ  MAX_WRITE  MAX_READ STARTUP_TIME
--------------- ---- ------------------ -------- ---- ----------  ---------  ---------  -------- ------------
192.168.1.64     1 15.16G 729.53G   2%     194  10     0    0     0     0     0     0    0     0 2014-08-07 18:57:11
192.168.1.66     1 15.16G 729.53G   2%     194  10     0    0     0     0     0     0    0     0 2014-08-07 19:00:06
Total : 2          2 30.31G   1.42T   2%     388  10      0     0     0     0

Store data into TFS:

# /usr/local/taobaoFS/bin/tfstool -s 192.168.1.63:8108 -i "put /root/haproxy.cfg"
(intermediate lines omitted)...
[2014-08-07 14:49:38] DEBUG tfs_file.cpp:803 [139796598765408] do response success. index: 0, phase: 3, ret: 0, blockid: 2866, fileid: 1, offset: 0, size: 0, crc: -1605709817, inneroffset: 0, filenumber: 4612152245717303297, status: 4, rserver: 192.168.1.64:8200, wserver: 192.168.1.64:8200.
put /root/haproxy.cfg => T1YybTByJT1RCvBVdK success.
[2014-08-07 14:49:39] INFO  transport.cpp:460 [139796598765408] DELIOC, IOCount:1, IOC:0x26076b0
[2014-08-07 14:49:39] DEBUG socket.cpp:122 [139796598765408] 1?? fd=4, addr=192.168.1.63:8108
[2014-08-07 14:49:39] INFO  transport.cpp:460 [139796598765408] DELIOC, IOCount:0, IOC:0x2609460
[2014-08-07 14:49:39] DEBUG socket.cpp:122 [139796598765408] 1?? fd=6, addr=192.168.1.64:8200
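When scripting tfstool, the returned TFS file name has to be pulled out of the log output. A sketch that extracts it from the "put ... => NAME success." line; the sample line is copied from the output above, but the parsing is an assumption about the log format, not a documented interface:

```shell
#!/usr/bin/env bash
# Extract the TFS file name from tfstool's "put X => NAME success." line.
extract_tfs_name() {
    sed -n 's/^put .* => \([A-Za-z0-9]*\) success\.$/\1/p'
}

sample='put /root/haproxy.cfg => T1YybTByJT1RCvBVdK success.'
echo "$sample" | extract_tfs_name    # prints T1YybTByJT1RCvBVdK
```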

Fetch data from TFS:

# /usr/local/taobaoFS/bin/tfstool -s 192.168.1.63:8108 -i "get T1YybTByJT1RCvBVdK /root/test/haproxy.cfg"
(intermediate lines omitted)...
[2014-08-07 15:27:27] DEBUG tfs_file.cpp:225 [139699370776416] file read reach end, offset: 0, size: 1048576
fetch T1YybTByJT1RCvBVdK => /root/test/haproxy.cfg success.
[2014-08-07 15:27:28] INFO  transport.cpp:460 [139699370776416] DELIOC, IOCount:1, IOC:0xb6e6b0
[2014-08-07 15:27:28] DEBUG socket.cpp:122 [139699370776416] 1?? fd=4, addr=192.168.1.63:8108
[2014-08-07 15:27:28] INFO  transport.cpp:460 [139699370776416] DELIOC, IOCount:0, IOC:0xb70680
[2014-08-07 15:27:28] DEBUG socket.cpp:122 [139699370776416] 1?? fd=6, addr=192.168.1.66:8202

Store a large file into TFS:

# /usr/local/taobaoFS/bin/tfstool -s 192.168.1.63:8108 -i "putL /root/CentOS-6.5-x86_64-minimal-freewaf-1.2.1-release_21152.iso"
(intermediate lines omitted)...
put /root/CentOS-6.5-x86_64-minimal-freewaf-1.2.1-release_21152.iso => L1YybTByZT1RCvBVdK success.
[2014-08-07 17:03:50] INFO  transport.cpp:460 [140184226105184] DELIOC, IOCount:2, IOC:0x194c6b0
[2014-08-07 17:03:50] DEBUG socket.cpp:122 [140184226105184] 1?? fd=4, addr=192.168.1.63:8108
[2014-08-07 17:03:50] INFO  transport.cpp:460 [140184226105184] DELIOC, IOCount:1, IOC:0x194e970
[2014-08-07 17:03:50] DEBUG socket.cpp:122 [140184226105184] 1?? fd=8, addr=192.168.1.66:8202
[2014-08-07 17:03:50] INFO  transport.cpp:460 [140184226105184] DELIOC, IOCount:0, IOC:0x194f930
[2014-08-07 17:03:50] DEBUG socket.cpp:122 [140184226105184] 1?? fd=9, addr=192.168.1.64:8200

Fetch the large file:

# /usr/local/taobaoFS/bin/tfstool -s 192.168.1.63:8108 -i "get L1YybTByZT1RCvBVdK /data/CentOS-6.5-x86_64-minimal-freewaf-1.2.1-release_21152.iso"
[2014-08-07 17:08:14] DEBUG tfs_file.cpp:201 [139878759466848] file read reach end, offset: 645128192, size: 0
fetch L1YybTByZT1RCvBVdK => /data/CentOS-6.5-x86_64-minimal-freewaf-1.2.1-release_21152.iso success.
[2014-08-07 17:08:15] INFO  transport.cpp:460 [139878759466848] DELIOC, IOCount:2, IOC:0xc5a6b0
[2014-08-07 17:08:15] DEBUG socket.cpp:122 [139878759466848] 1?? fd=4, addr=192.168.1.63:8108
[2014-08-07 17:08:15] INFO  transport.cpp:460 [139878759466848] DELIOC, IOCount:1, IOC:0xc5c7b0
[2014-08-07 17:08:15] DEBUG socket.cpp:122 [139878759466848] 1?? fd=6, addr=192.168.1.64:8200
[2014-08-07 17:08:15] INFO  transport.cpp:460 [139878759466848] DELIOC, IOCount:0, IOC:0xc63790
[2014-08-07 17:08:15] DEBUG socket.cpp:122 [139878759466848] 1?? fd=7, addr=192.168.1.66:8202

Compare the results:

[root@IP-1-63 ~]# ll -h /root/
total 828M
-rw-------. 1 root root 1.1K Jul 31 10:48 anaconda-ks.cfg
-rw-r--r--. 1 root root 3.5M Jul 31 14:16 Atlas-2.1.el6.x86_64.rpm
-rw-r--r--  1 root root 616M Aug  3 13:27 CentOS-6.5-x86_64-minimal-freewaf-1.2.1-release_21152.iso
-rw-r--r--. 1 root root 3.1K Jul 31 13:53 haproxy.cfg
-rw-r--r--. 1 root root 8.4K Jul 31 10:48 install.log
-rw-r--r--. 1 root root 3.4K Jul 31 10:47 install.log.syslog
-rw-r--r--  1 root root  51M Jun 19 06:05 Ipart.apk
-rw-r--r--  1 root root 137M Mar 22 14:47 lmnp.tar.gz
-rw-r--r--  1 root root  22M Nov 13  2013 Percona-Server-5.5.31-rel30.3.tar.gz
drwxr-xr-x  2 root root 4.0K Aug  7 15:27 test

[root@IP-1-63 ~]# ll -h test/
total 3.5M
-rw-r--r-- 1 root root 3.5M Aug  7 15:26 Atlas-2.1.el6.x86_64.rpm
-rw-r--r-- 1 root root 3.1K Aug  7 15:27 haproxy.cfg

[root@IP-1-63 ~]# ll  -h /data/
total 616M
-rw-r--r--  1 root root 616M Aug  7 17:08 CentOS-6.5-x86_64-minimal-freewaf-1.2.1-release_21152.iso
drwx------. 2 root root  16K Jul 31 10:45 lost+found
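Comparing file sizes with ll is only a coarse check; a checksum makes the put/get round trip properly verifiable. A sketch with local stand-in files (the cp stands in for the tfstool put/get pair above):

```shell
#!/usr/bin/env bash
# Verify a store/fetch round trip by checksum.
# 'cp' stands in for: tfstool put, then tfstool get.
workdir=$(mktemp -d)
echo "test payload" > "$workdir/original"
cp "$workdir/original" "$workdir/fetched"

a=$(md5sum "$workdir/original" | awk '{print $1}')
b=$(md5sum "$workdir/fetched"  | awk '{print $1}')
if [ "$a" = "$b" ]; then
    echo "round trip OK"
else
    echo "round trip CORRUPTED" >&2
fi
rm -rf "$workdir"
```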

Background: TFS (Taobao File System) is a highly scalable, highly available, high-performance distributed file system for Internet services, designed to support massive amounts of unstructured data. Home-grown file systems are still rare in China, and Taobao has done useful exploration and practice in this area: TFS, the distributed file system used inside Taobao, is specially optimized for random read/write access to huge numbers of small files, and stores all the images, product descriptions and similar data of Taobao's main site. Recently, Chucai (Li Zhen), an engineer on Taobao's core systems team, published a brief introduction to TFS ("TFS简介") on the team's official blog, which drew the community's attention. The article first summarizes TFS's characteristics:
- A completely flat data organization that abandons the directory structure of traditional file systems.
- Its own file system built directly on the block device, reducing the performance loss caused by data fragmentation in file systems such as EXT3.
- A single process managing a single disk, dropping the RAID5 mechanism.
- A central control node with an HA mechanism, balancing safety and stability against performance complexity.
- Metadata kept as small as possible and loaded entirely into memory to speed up access.
- Load balancing and redundancy policies across racks and IDCs.
- Fully smooth capacity expansion.
TFS is currently deployed at Taobao at the scale of "hundreds of PC servers, petabytes of data, tens of billions of records". On its performance, Chucai revealed: in Taobao's deployment there are two layers of cache in front of TFS, so the requests that reach TFS are very scattered; TFS therefore keeps no in-memory data cache at all, not even the memory cache of a traditional file system... roughly 60% of a single disk's theoretical maximum random IOPS (I/O per second) can be reached, and whole-machine throughput grows linearly with the number of disks.
(Figure 1, the TFS logical architecture diagram, appears on the Taobao core systems team blog.) Chucai explained the architecture further:
- TFS does not yet provide a traditional file-system API to end users; access goes through TFSClient, with Java, JNI, C and PHP clients currently available.
- The TFS NameServer is the central control node: it monitors the state of all data nodes, handles load balancing for read/write scheduling, and manages the first-level metadata that helps clients locate the data node they need to access.
- The TFS DataServer is the data node: it carries out the actual load balancing and data redundancy, and manages the second-level metadata that helps clients obtain the real business data.