Fixing the system hang after "map 0% reduce 0%"

This post records how to fix a system that hangs and eventually freezes after running several MapReduce jobs. Ubuntu's System Monitor showed that memory usage was very high and the swap partition was nearly full. The fix is to enlarge swap: create new swap space with dd, format it with mkswap, activate it with swapon, and add an entry to /etc/fstab so it is enabled at boot. When a similar problem appears, check system resource usage and grow swap before it runs out, to avoid a system freeze.


After running a few MapReduce jobs, launching one more would freeze the whole system. None of the fixes I found online helped. Then I happened to notice that Ubuntu ships a System Monitor; opening it showed memory usage was very high and the swap partition was almost exhausted, and as soon as the next map task used up the remaining swap the machine locked up. After a reboot I grew swap from 380 MB to 8 GB, and running MapReduce no longer freezes the machine. BTW, I also ordered a 4 GB RAM stick last night for good measure.

How to open System Monitor:

$ gnome-system-monitor
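
If the machine is headless or already sluggish, the same information is available from a terminal (a quick check, not part of the original write-up; on older systems swapon -s or cat /proc/swaps gives the same listing):

$ free -h (memory and swap usage at a glance)

$ swapon --show (active swap devices/files and how much of each is in use)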


How to extend the swap partition:

How to extend/remove a swap partition


Background:

When Oracle was installed, only 4 GB of swap was allocated. This later turned out to be too small for the workload, so the swap space was enlarged.

Procedure:

Extending swap is straightforward, but it requires root privileges.

# dd if=/dev/zero of=/swap bs=1024M count=8 (carve 8 x 1024 MB out of the / filesystem and write it to the file /swap)
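
mkswap will warn if the swap file is readable by other users, so it is common to tighten its permissions first (an extra step not in the original notes; on filesystems that support it, fallocate -l 8G /swap is also a much faster way to create the file than dd):

# chmod 600 /swap (make the swap file readable and writable by root only)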

# mkswap /swap (format the file as swap space)

# swapon /swap (activate /swap and add it to the available swap space)
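
At this point the new space should already be visible (a quick sanity check, not part of the original steps; swapon -s works on older systems):

# swapon --show (the /swap file should appear in the list)

# free -h (total swap should now include the extra 8 GB)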

# vim /etc/fstab (make the new swap space come up automatically at boot)

--> add the following line:

/swap swap swap defaults 0 0

If you no longer want to use it and need to remove it, just run # swapoff /swap.
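
To get rid of the swap file completely rather than just deactivate it, the file and the fstab entry also have to go (a minimal sketch of the reverse of the steps above):

# swapoff /swap (stop using the file as swap)

# rm /swap (delete the file and reclaim the disk space)

# vim /etc/fstab (remove the "/swap swap swap defaults 0 0" line added earlier)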


A sample run that sits at "map 0% reduce 0%" and then fails:

[root@master exp3]# hadoop jar /export/servers/hadoop-3.4.0/share/hadoop/tools/lib/hadoop-streaming-3.4.0.jar \
    -D mapreduce.job.reduces=1 \
    -D stream.map.output.field.separator=\t \
    -D stream.num.map.output.key.fields=1 \
    -mapper "python /export/data/chap-5/exp3/Mapper.py" \
    -reducer "python /export/data/chap-5/exp3/Reducer.py" \
    -input /exp1/input -output /exp1/output_$(date +%s) \
    -file ./Mapper.py -file ./Reducer.py

2025-05-31 16:24:06,054 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead.
packageJobJar: [./Mapper.py, ./Reducer.py, /tmp/hadoop-unjar2796039736802025778/] [] /tmp/streamjob2856802959225773798.jar tmpDir=null
2025-05-31 16:24:06,522 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at master/192.168.24.67:8032
2025-05-31 16:24:06,621 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at master/192.168.24.67:8032
2025-05-31 16:24:06,844 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1748671547381_0012
2025-05-31 16:24:07,087 INFO mapred.FileInputFormat: Total input files to process : 1
2025-05-31 16:24:07,123 INFO mapreduce.JobSubmitter: number of splits:2
2025-05-31 16:24:07,218 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1748671547381_0012
2025-05-31 16:24:07,218 INFO mapreduce.JobSubmitter: Executing with tokens: []
2025-05-31 16:24:07,334 INFO conf.Configuration: resource-types.xml not found
2025-05-31 16:24:07,334 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2025-05-31 16:24:07,379 INFO impl.YarnClientImpl: Submitted application application_1748671547381_0012
2025-05-31 16:24:07,404 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1748671547381_0012/
2025-05-31 16:24:07,405 INFO mapreduce.Job: Running job: job_1748671547381_0012
2025-05-31 16:24:13,507 INFO mapreduce.Job: Job job_1748671547381_0012 running in uber mode : false
2025-05-31 16:24:13,508 INFO mapreduce.Job: map 0% reduce 0%
2025-05-31 16:24:19,780 INFO mapreduce.Job: Task Id : attempt_1748671547381_0012_m_000001_0, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
        at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:326)
        at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:539)
        at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:129)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
        at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:466)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:350)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:178)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:172)
2025-05-31 16:24:19,816 INFO mapreduce.Job: Task Id : attempt_1748671547381_0012_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1 (same stack trace as above)
2025-05-31 16:24:23,906 INFO mapreduce.Job: Task Id : attempt_1748671547381_0012_m_000001_1, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1 (same stack trace as above)
2025-05-31 16:24:23,912 INFO mapreduce.Job: Task Id : attempt_1748671547381_0012_m_000000_1, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1 (same stack trace as above)
2025-05-31 16:24:27,978 INFO mapreduce.Job: Task Id : attempt_1748671547381_0012_m_000001_2, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1 (same stack trace as above)
2025-05-31 16:24:29,025 INFO mapreduce.Job: Task Id : attempt_1748671547381_0012_m_000000_2, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1 (same stack trace as above)
2025-05-31 16:24:33,127 INFO mapreduce.Job: map 50% reduce 100%
2025-05-31 16:24:34,137 INFO mapreduce.Job: map 100% reduce 100%
2025-05-31 16:24:34,145 INFO mapreduce.Job: Job job_1748671547381_0012 failed with state FAILED due to: Task failed task_1748671547381_0012_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0 killedMaps:0 killedReduces: 0
2025-05-31 16:24:34,197 INFO mapreduce.Job: Counters: 14
        Job Counters
                Failed map tasks=7
                Killed map tasks=1
                Killed reduce tasks=1
                Launched map tasks=8
                Other local map tasks=6
                Data-local map tasks=2
                Total time spent by all maps in occupied slots (ms)=23810
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=23810
                Total vcore-milliseconds taken by all map tasks=23810
                Total megabyte-milliseconds taken by all map tasks=24381440
        Map-Reduce Framework
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0
2025-05-31 16:24:34,197 ERROR streaming.StreamJob: Job not successful!
Streaming Command Failed!
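
The stack trace above only says that the Python subprocess exited with code 1; the script's own traceback is not shown here (it would be in the YARN container logs). A common first check, outside the scope of the original article and using a hypothetical sample file name, is to run the mapper and reducer locally as a plain pipe so the Python error prints directly:

$ hdfs dfs -cat /exp1/input/* | head -n 20 > sample.txt (grab a small sample of the real input)

$ cat sample.txt | python Mapper.py | sort | python Reducer.py (reproduce the streaming pipeline locally)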