Finding the BLK blocks of an HDFS file with 3 replicas


This post shows how to use the hadoop fsck command to inspect the replica distribution of an HDFS file: how to find the file's BLK blocks and where each block is stored across the DataNodes.


When HDFS is deployed with the default replication factor of 3, every file is split into blocks (BLKs) and each block is stored in three copies. So which BLKs does a given file split into, and where is each one stored?
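Before asking fsck, the split itself is simple arithmetic: the block count is the file size divided by the block size, rounded up. A quick sketch, assuming the Hadoop 2.x default dfs.blocksize of 128 MB and a made-up 300 MB file:

```shell
# Assumed default dfs.blocksize of 128 MB (Hadoop 2.x+); check your cluster with:
#   hdfs getconf -confKey dfs.blocksize
BLOCK_SIZE=$((128 * 1024 * 1024))

# Hypothetical 300 MB file: it splits into ceil(300/128) = 3 BLKs,
# and with replication factor 3 that means 9 physical block copies.
FILE_SIZE=$((300 * 1024 * 1024))
NUM_BLOCKS=$(( (FILE_SIZE + BLOCK_SIZE - 1) / BLOCK_SIZE ))
REPLICAS=3

echo "blocks:   $NUM_BLOCKS"                 # 3
echo "physical: $((NUM_BLOCKS * REPLICAS))"  # 9
```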


hadoop fsck <path-to-file> -files -blocks -locations

#######################

[root@master ~]# hadoop fsck --help
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.


Usage: DFSck <path> [-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]
        <path>  start checking from this path
        -move   move corrupted files to /lost+found
        -delete delete corrupted files
        -files  print out files being checked
        -openforwrite   print out files opened for write
        -includeSnapshots       include snapshot data if the given path indicates a snapshottable directory or there are snapshottable directories under it
        -list-corruptfileblocks print out list of missing blocks and files they belong to
        -blocks print out block report
        -locations      print out locations for every block
        -racks  print out network topology for data-node locations
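With -blocks -locations, each block appears in the report on a line containing its blk_&lt;id&gt; and the DataNode addresses holding its replicas. The fragment below pulls those fields out of one such line; the sample line is hand-written to illustrate the assumed Hadoop 2.x report format, not real cluster output:

```shell
# Hand-written sample line in the assumed fsck -blocks -locations format;
# on a real cluster you would pipe the output of
#   hdfs fsck /path/to/file -files -blocks -locations
# through the same greps instead.
line='0. BP-929597290-10.0.0.1-1423221265:blk_1073741825_1001 len=134217728 repl=3 [10.0.0.1:50010, 10.0.0.2:50010, 10.0.0.3:50010]'

# Extract the block ID, then the three replica locations (DataNode addresses).
echo "$line" | grep -o 'blk_[0-9_]*'    # blk_1073741825_1001
echo "$line" | grep -o '[0-9.]*:50010'  # one address per replica, 3 lines
```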
