Running wordcount only shows INFO mapreduce.Job: map 0% reduce 0%

This post records a failed Hadoop job run and how it was resolved. The job failed because the virtual machines did not have enough memory; increasing each VM's memory from 512 MB to 1 GB fixed the problem.

Error output:

[xiaoqiu@s150 /home/xiaoqiu]$ hadoop jar wordcounter.jar com.cr.wordcount.WordcountApp hdfs://s150/user/xiaoqiu/data/wc.txt hdfs://s150/user/xiaoqiu/data/out
18/01/05 09:12:52 INFO client.RMProxy: Connecting to ResourceManager at s150/192.168.109.150:8032
18/01/05 09:12:55 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
18/01/05 09:13:13 INFO input.FileInputFormat: Total input paths to process : 1
18/01/05 09:13:18 INFO mapreduce.JobSubmitter: number of splits:1
18/01/05 09:13:21 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1515157734609_0002
18/01/05 09:14:14 INFO impl.YarnClientImpl: Submitted application application_1515157734609_0002
18/01/05 09:14:16 INFO mapreduce.Job: The url to track the job: http://s150:8088/proxy/application_1515157734609_0002/
18/01/05 09:14:16 INFO mapreduce.Job: Running job: job_1515157734609_0002
18/01/05 09:20:09 INFO mapreduce.Job: Job job_1515157734609_0002 running in uber mode : false
18/01/05 09:20:16 INFO mapreduce.Job:  map 0% reduce 0%
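
When the output stalls at this point, the job has usually been accepted by the ResourceManager but no container has been allocated yet, which typically means YARN cannot find a NodeManager with enough free memory. Before changing anything, the cluster state can be checked from the command line (the application ID below is the one from the log above; the commands are standard Hadoop 2.x CLI, they do not appear in the original post):

yarn node -list                                            # lists the NodeManagers and how many containers each is running
yarn application -status application_1515157734609_0002    # shows whether the application is still stuck in the ACCEPTED state

The tracking URL on port 8088 shown in the log gives the same information in the ResourceManager web UI.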
Solution:

I changed the memory allocated to the virtual machines. Each VM was originally configured with 512 MB; after raising it to 1 GB, the job ran successfully.
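The container sizes YARN is willing to hand out also have to fit into the node's memory. On a 1 GB VM it can help to set the limits explicitly; a minimal yarn-site.xml sketch (the values are illustrative assumptions for a small test cluster, not the configuration used in the original post):

<configuration>
  <!-- total memory the NodeManager may hand out to containers on this node -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>1024</value>
  </property>
  <!-- smallest container YARN will allocate -->
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>256</value>
  </property>
  <!-- largest container YARN will allocate -->
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>1024</value>
  </property>
  <!-- disable virtual-memory checking, which often kills containers on small-memory nodes -->
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
</configuration>

After editing the file, restart YARN (stop-yarn.sh, then start-yarn.sh) so the new limits take effect.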

Reposted from: https://www.cnblogs.com/flyingcr/p/10326965.html
