Hadoop Command Summary

This post covers basic file operations with the Hadoop filesystem shell, including listing directories, uploading files, downloading files, and deleting files.


List files:

$ ./hadoop fs -ls

Found 17 items
-rwxr-xr-x   1 yj70978 retailfi       1259 2013-01-30 21:05 /home/yj70978/hadoop/hadoop-1.1.2/bin/start-mapred.sh
-rwxr-xr-x   1 yj70978 retailfi       2642 2013-01-30 21:05 /home/yj70978/hadoop/hadoop-1.1.2/bin/hadoop-config.sh
-rwxr-xr-x   1 yj70978 retailfi       2810 2013-01-30 21:05 /home/yj70978/hadoop/hadoop-1.1.2/bin/rcc
-rwxr-xr-x   1 yj70978 retailfi      14189 2013-07-22 07:58 /home/yj70978/hadoop/hadoop-1.1.2/bin/hadoop
-rwxr-xr-x   1 yj70978 retailfi       1329 2013-01-30 21:05 /home/yj70978/hadoop/hadoop-1.1.2/bin/hadoop-daemons.sh
-rwxr-xr-x   1 yj70978 retailfi       1145 2013-01-30 21:05 /home/yj70978/hadoop/hadoop-1.1.2/bin/start-jobhistoryserver.sh
-rwxr-xr-x   1 yj70978 retailfi       2143 2013-01-30 21:05 /home/yj70978/hadoop/hadoop-1.1.2/bin/slaves.sh
-rwxr-xr-x   1 yj70978 retailfi       1116 2013-01-30 21:05 /home/yj70978/hadoop/hadoop-1.1.2/bin/stop-balancer.sh
-rwxr-xr-x   1 yj70978 retailfi       1745 2013-01-30 21:05 /home/yj70978/hadoop/hadoop-1.1.2/bin/start-dfs.sh
-rwxr-xr-x   1 yj70978 retailfi       1168 2013-01-30 21:05 /home/yj70978/hadoop/hadoop-1.1.2/bin/stop-mapred.sh
-rwxr-xr-x   1 yj70978 retailfi       1246 2013-01-30 21:05 /home/yj70978/hadoop/hadoop-1.1.2/bin/stop-dfs.sh
-rwxr-xr-x   1 yj70978 retailfi       1166 2013-01-30 21:05 /home/yj70978/hadoop/hadoop-1.1.2/bin/start-all.sh
-rwxr-xr-x   1 yj70978 retailfi       1119 2013-01-30 21:05 /home/yj70978/hadoop/hadoop-1.1.2/bin/stop-all.sh
-rwxr-xr-x   1 yj70978 retailfi      63970 2013-01-30 21:06 /home/yj70978/hadoop/hadoop-1.1.2/bin/task-controller
-rwxr-xr-x   1 yj70978 retailfi       1065 2013-01-30 21:05 /home/yj70978/hadoop/hadoop-1.1.2/bin/start-balancer.sh
-rwxr-xr-x   1 yj70978 retailfi       1131 2013-01-30 21:05 /home/yj70978/hadoop/hadoop-1.1.2/bin/stop-jobhistoryserver.sh
-rwxr-xr-x   1 yj70978 retailfi       4649 2013-01-30 21:05 /home/yj70978/hadoop/hadoop-1.1.2/bin/hadoop-daemon.sh

HDFS has a default working directory of /user/$USER, where $USER is your login user name. This directory isn’t automatically created for you, though, so let’s create it with the mkdir command.
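
For example, assuming your login name is mz50947 (substitute your own), the following should create it:

$ hadoop fs -mkdir /user/mz50947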

List all subdirectories recursively,

hadoop fs -lsr /user
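
The -lsr option is deprecated in later Hadoop releases; on Hadoop 2.x and newer the recursive listing is written with the -R flag instead:

hadoop fs -ls -R /user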

Copy a file from the local filesystem to HDFS,

$ hadoop fs -put 1.txt /user/mz50947

No encryption was performed by peer.

$ hadoop fs -ls /user/mz50947

No encryption was performed by peer.

Found 2 items

-rw-r--r--   3 mz50947 enterpriserisk          0 2013-07-23 01:39 /user/mz50947/1

-rw-r--r--   3 mz50947 enterpriserisk          0 2013-07-23 01:40 /user/mz50947/1.txt

Delete a file in HDFS,

$ hadoop fs -rm /user/mz50947/1.txt

No encryption was performed by peer.

Moved: 'hdfs://bdwar001m01l.nam.nsroot.net:8020/user/mz50947/1.txt' to trash at: hdfs://bdwar001m01l.nam.nsroot.net:8020/user/mz50947/.Trash/Current
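
As the output shows, -rm moves the file into the user's .Trash directory rather than deleting it immediately (whether trash is used depends on the cluster's fs.trash.interval setting). To bypass the trash or empty it, something like the following should work:

$ hadoop fs -rm -skipTrash /user/mz50947/1.txt

$ hadoop fs -expunge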

See the contents of a file,

hadoop fs -cat /user/mz50947/1.txt

No encryption was performed by peer.

Create an empty file in HDFS (touch),
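
The -touchz option creates a zero-length file, like the empty /user/mz50947/1 entry seen in the listing above; for example:

hadoop fs -touchz /user/mz50947/1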

Get a file from HDFS to the local filesystem,

hadoop fs -get /user/mz50947/1.txt .
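
For copies to the local filesystem, -copyToLocal is equivalent:

hadoop fs -copyToLocal /user/mz50947/1.txt .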

Look up help,

hadoop fs -help
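
Passing a command name should print the usage for just that command, e.g.:

hadoop fs -help rm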
