First, let's look at the error message:
Ended Job = job_1639620592561_0002 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1639620592561_0002_m_000000 (and more) from job job_1639620592561_0002
Task with the most failures(4):
-----
Task ID:
task_1639620592561_0002_m_000000
URL:
http://hadoop101:8088/taskdetails.jsp?jobid=job_1639620592561_0002&tipid=task_1639620592561_0002_m_000000
-----
Diagnostic Messages for this Task:
[2021-12-16 10:56:22.545]Container [pid=2417,containerID=container_1639620592561_0002_01_000005] is running 263662080B beyond the 'VIRTUAL' memory limit. Current usage: 95.8 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1639620592561_0002_01_000005 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 2429 2417 2417 2417 (java) 532 22 2508722176 24
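The numbers in the diagnostic line are consistent with YARN's default virtual-memory check. A minimal sketch of the arithmetic, assuming the 2.1 GB limit comes from the container's 1 GB physical allocation multiplied by the default `yarn.nodemanager.vmem-pmem-ratio` of 2.1:

```python
# Reproduce the virtual-memory check that killed the container.
# Assumption: limit = physical allocation * yarn.nodemanager.vmem-pmem-ratio.
phys_mem_gb = 1.0          # container physical memory, from the log ("1 GB")
vmem_pmem_ratio = 2.1      # YARN default for yarn.nodemanager.vmem-pmem-ratio

vmem_limit_gb = phys_mem_gb * vmem_pmem_ratio  # matches the "2.1 GB" in the log
vmem_used_gb = 2.3                             # virtual memory reported in the log

over_limit = vmem_used_gb > vmem_limit_gb      # True, so YARN kills the container
print(vmem_limit_gb, over_limit)
```

This is why a container using only 95.8 MB of physical memory can still be killed: the check is on virtual memory, and a JVM's virtual address space is easily several times its resident size.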

This post describes a resource problem hit while running a Hadoop MapReduce (MR) job: the container exceeded its virtual memory limit, so YARN killed the task. Note that the physical memory usage (95.8 MB of 1 GB) was fine; it is the virtual memory (2.3 GB of 2.1 GB) that went over. Two fixes are offered: run Hive in local mode, or adjust the virtual memory settings.
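A sketch of both fixes. These are standard, documented properties, but the exact values here are illustrative, not tuned for any particular cluster.

Option 1: let Hive run small jobs in local mode, bypassing YARN containers entirely (per-session setting):

```sql
-- Run qualifying queries locally instead of submitting an MR job to YARN
set hive.exec.mode.local.auto=true;
```

Option 2: relax the virtual memory check in `yarn-site.xml` on the NodeManagers (requires a NodeManager restart):

```xml
<!-- Either disable the virtual memory check entirely... -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<!-- ...or raise the virtual-to-physical ratio from the 2.1 default -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
```

Disabling the check is the common quick fix for this error; raising the ratio is the more conservative choice if you want YARN to keep guarding against runaway processes.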