Common HDFS Commands

1. Common hadoop commands
fs: a generic filesystem client; hadoop fs is equivalent to hdfs dfs
version: print the Hadoop version
jar: run a jar file (e.g. submit a job to YARN)
checknative: check whether Hadoop's native and compression libraries are available; the CDH build Cloudera ships does not support compression by default, so the source code must be recompiled
classpath: print the jar paths Hadoop loads at runtime.
If a service process fails to start because its path was never appended, or an extra custom jar/lib was not added, hadoop classpath shows which paths are loaded; a third-party jar must be added explicitly, either by placing it under one of those paths or as sketched below.
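For example, a minimal sketch of appending a custom jar (the path /home/hadoop/lib/mylib.jar is hypothetical): either copy the jar into one of the directories hadoop classpath prints, or export HADOOP_CLASSPATH before starting the service:

[hadoop@hadoop001 ~]$ export HADOOP_CLASSPATH=/home/hadoop/lib/mylib.jar:$HADOOP_CLASSPATH
[hadoop@hadoop001 ~]$ hadoop classpath | tr ':' '\n' | grep mylib    # verify the jar's path now appears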

Usage: hadoop [--config confdir] COMMAND
       where COMMAND is one of:      
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  credential           interact with credential providers
  daemonlog            get/set the log level for each daemon
  trace                view and modify Hadoop tracing settings
 or
  CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.

(1) checknative: if compression shows as unsupported, the native libraries need to be compiled

[hadoop@hadoop001 hadoop]$ hadoop checknative
19/07/12 18:10:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Native library checking:
hadoop:  false 
zlib:    false 
snappy:  false 
lz4:     false 
bzip2:   false 
openssl: false 
19/07/12 18:10:52 INFO util.ExitUtil: Exiting with status 1
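Getting these to show true requires compiling the native libraries from source. A sketch of the standard build, assuming the prerequisites (JDK, Maven, protobuf 2.5, cmake, and the snappy/openssl development headers) are already installed:

# run from the root of the Hadoop source tree
mvn clean package -Pdist,native -DskipTests -Dtar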

(2) classpath

[hadoop@hadoop001 hadoop]$ hadoop classpath
/home/hadoop/software/hadoop-2.6.0-cdh5.7.0/etc/hadoop:/home/hadoop/software/hadoop-2.6.0-cdh5.7.0/share/hadoop/common/lib/*:/home/hadoop/software/hadoop-2.6.0-cdh5.7.0/share/hadoop/common/*:/home/hadoop/software/hadoop-2.6.0-cdh5.7.0/share/hadoop/hdfs:/home/hadoop/software/hadoop-2.6.0-cdh5.7.0/share/hadoop/hdfs/lib/*:/home/hadoop/software/hadoop-2.6.0-cdh5.7.0/share/hadoop/hdfs/*:/home/hadoop/software/hadoop-2.6.0-cdh5.7.0/share/hadoop/yarn/lib/*:/home/hadoop/software/hadoop-2.6.0-cdh5.7.0/share/hadoop/yarn/*:/home/hadoop/software/hadoop-2.6.0-cdh5.7.0/share/hadoop/mapreduce/lib/*:/home/hadoop/software/hadoop-2.6.0-cdh5.7.0/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar

2. Common HDFS commands

[hadoop@hadoop001 ~]$ hdfs dfs -ls /                       # list a directory
[hadoop@hadoop001 ~]$ hdfs dfs -cat <path>                 # print a file's contents
[hadoop@hadoop001 ~]$ hdfs dfs -chmod -R <mode> <path>     # change permissions recursively
[hadoop@hadoop001 ~]$ hdfs dfs -chown -R <owner> <path>    # change ownership recursively
[hadoop@hadoop001 ~]$ hdfs dfs -put <localsrc> <dst>       # upload a local file to HDFS
[hadoop@hadoop001 ~]$ hdfs dfs -mv <src> <dst>             # move/rename within HDFS
[hadoop@hadoop001 ~]$ hdfs dfs -text <path>                # print a (possibly compressed) file as text
# -format (as in hdfs namenode -format) is a high-risk option: it wipes the NameNode metadata
# fsck checks the health of the filesystem
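A quick round trip with the commands above, using a hypothetical demo.txt:

[hadoop@hadoop001 ~]$ echo "hello hdfs" > demo.txt
[hadoop@hadoop001 ~]$ hdfs dfs -put demo.txt /tmp/demo.txt        # upload
[hadoop@hadoop001 ~]$ hdfs dfs -ls /tmp                           # confirm it landed
[hadoop@hadoop001 ~]$ hdfs dfs -cat /tmp/demo.txt                 # read it back
[hadoop@hadoop001 ~]$ hdfs dfs -mv /tmp/demo.txt /tmp/demo2.txt   # rename in place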

Example: run a check from the root directory; the status comes back HEALTHY.

[hadoop@hadoop001 hadoop]$ hdfs fsck /
19/07/12 19:56:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connecting to namenode via http://hadoop001:50070
FSCK started by hadoop (auth:SIMPLE) from /192.168.0.130 for path / at Fri Jul 12 19:56:34 CST 2019
..Status: HEALTHY
 Total size:    54 B
 Total dirs:    8
 Total files:   2
 Total symlinks:                0
 Total blocks (validated):      2 (avg. block size 27 B)
 Minimally replicated blocks:   2 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    1
 Average block replication:     1.0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          1
 Number of racks:               1
FSCK ended at Fri Jul 12 19:56:34 CST 2019 in 33 milliseconds


The filesystem under path '/' is HEALTHY

If the status is not HEALTHY, bring the missing daemon back up:
cd sbin
Normally HDFS is started with start-dfs.sh, which checks for already-running processes during startup.
What it actually calls underneath is hadoop-daemon.sh,
which invokes HDFS according to the COMMAND argument it is given and performs the corresponding action.

[hadoop@hadoop001 sbin]$ ./hadoop-daemon.sh start datanode
# verify with jps that the DataNode started
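For example (the PIDs below are illustrative, not from the source environment):

[hadoop@hadoop001 sbin]$ jps
25954 DataNode
25780 NameNode
26132 SecondaryNameNode
26345 Jps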

If you monitor this with a shell script, redirect the fsck output to fsck.log; collecting that log lets you diagnose problems or raise early warnings.

[hadoop@hadoop001 ~]$ hdfs fsck / >fsck.log
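A minimal monitoring sketch along these lines, assuming it runs from cron; the echo is a placeholder for a real alerting hook:

#!/bin/bash
# check HDFS health; alert when fsck does not report HEALTHY
hdfs fsck / > /home/hadoop/fsck.log 2>&1
if ! grep -q "is HEALTHY" /home/hadoop/fsck.log; then
    echo "HDFS fsck found problems at $(date)"    # replace with your mail/alert command
fi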

Delete the corrupt files (to repair rather than delete them, see the article on resolving block corruption):

[hadoop@hadoop001 ~]$ hdfs fsck / -delete
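Before deleting anything, you can first list the affected files, or move them to /lost+found instead of removing them; both are standard fsck options:

[hadoop@hadoop001 ~]$ hdfs fsck / -list-corruptfileblocks    # list corrupt blocks and the files they belong to
[hadoop@hadoop001 ~]$ hdfs fsck / -move                      # move corrupt files to /lost+found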