Syncing folder data between HDFS clusters (from my QQ空间)

To sync a directory from one HDFS cluster to another, run DistCp with the source and destination directory URIs:

```
hadoop distcp hdfs://pc1:8020/user/uar/receive/click/ hdfs://pc4:8020/user/uar/receive/click/
```

Both paths above are directories. To sync a single file instead, just replace the directory paths with file paths.

Output of the run:

```
18/10/30 16:23:01 INFO impl.TimelineClientImpl: Timeline service address: http://dev-bdp-node-03:8188/ws/v1/timeline/
18/10/30 16:23:01 INFO client.RMProxy: Connecting to ResourceManager at dev-bdp-node-03/10.30.5.82:8050
18/10/30 16:23:02 INFO impl.TimelineClientImpl: Timeline service address: http://dev-bdp-node-03:8188/ws/v1/timeline/
18/10/30 16:23:02 INFO client.RMProxy: Connecting to ResourceManager at dev-bdp-node-03/10.30.5.82:8050
18/10/30 16:23:03 INFO mapreduce.JobSubmitter: number of splits:2
18/10/30 16:23:03 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1537237069924_2182
18/10/30 16:23:03 INFO impl.YarnClientImpl: Submitted application application_1537237069924_2182
18/10/30 16:23:03 INFO mapreduce.Job: The url to track the job: http://dev-bdp-node-03:8088/proxy/application_1537237069924_2182/
18/10/30 16:23:03 INFO tools.DistCp: DistCp job-id: job_1537237069924_2182
18/10/30 16:23:03 INFO mapreduce.Job: Running job: job_1537237069924_2182
18/10/30 16:23:38 INFO mapreduce.Job: Job job_1537237069924_2182 running in uber mode : false
18/10/30 16:23:38 INFO mapreduce.Job:  map 0% reduce 0%
18/10/30 16:23:44 INFO mapreduce.Job:  map 100% reduce 0%
18/10/30 16:23:44 INFO mapreduce.Job: Job job_1537237069924_2182 completed successfully
18/10/30 16:23:45 INFO mapreduce.Job: Counters: 33
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=272846
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=4590
        HDFS: Number of bytes written=3711
        HDFS: Number of read operations=28
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=7
    Job Counters
        Launched map tasks=2
        Other local map tasks=2
        Total time spent by all maps in occupied slots (ms)=6661
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=6661
        Total vcore-seconds taken by all map tasks=6661
        Total megabyte-seconds taken by all map tasks=6820864
    Map-Reduce Framework
        Map input records=2
        Map output records=0
        Input split bytes=230
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=540
        CPU time spent (ms)=1310
        Physical memory (bytes) snapshot=427872256
        Virtual memory (bytes) snapshot=7511072768
        Total committed heap usage (bytes)=371720192
    File Input Format Counters
        Bytes Read=649
    File Output Format Counters
        Bytes Written=0
    org.apache.hadoop.tools.mapred.CopyMapper$Counter
        BYTESCOPIED=3711
        BYTESEXPECTED=3711
        COPY=2
```
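For recurring syncs, DistCp's incremental options usually matter more than the plain copy above. The sketch below is a minimal example against the same pc1/pc4 addresses; `-update`, `-delete`, `-p`, and `-m` are standard DistCp flags, but their exact behavior can vary a little across Hadoop versions, and the part-00000 file name is only a placeholder, not something from the original post.

```bash
# Incremental sync: copy only files that are missing on the target or whose
# size/checksum differs from the source.
hadoop distcp -update \
  hdfs://pc1:8020/user/uar/receive/click/ \
  hdfs://pc4:8020/user/uar/receive/click/

# Mirror sync: also delete target files that no longer exist on the source
# (-delete requires -update or -overwrite), preserve user/group/permissions
# (-pugp), and cap the job at 10 map tasks (-m 10).
hadoop distcp -update -delete -pugp -m 10 \
  hdfs://pc1:8020/user/uar/receive/click/ \
  hdfs://pc4:8020/user/uar/receive/click/

# Single file: point the arguments at a file instead of a directory
# (part-00000 is just an example name).
hadoop distcp \
  hdfs://pc1:8020/user/uar/receive/click/part-00000 \
  hdfs://pc4:8020/user/uar/receive/click/part-00000
```

If `-delete` feels risky, run with `-update` alone first and compare the two directories with `hdfs dfs -count` before enabling it.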