Hadoop map/reduce verification fails: ERROR security.UserGroupInformation: PriviledgedActionException

Running the WordCount example on a Hadoop 2 cluster fails with a cluster-initialization error, even though the ResourceManager, NameNode, NodeManager, and DataNode have all started successfully.


On master1 of the Hadoop 2 cluster, I run the WordCount example that ships with Hadoop, using the following commands:
cd /usr/local/hadoop/hadoop-2.0.2-alpha/share/hadoop/mapreduce
hadoop jar hadoop-mapreduce-examples-2.0.2-alpha.jar wordcount hdfs://192.168.1.33:9000/input hdfs://192.168.1.33:9000/output
The job fails with the following error:
13/06/24 05:48:06 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:121)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:83)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:76)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1188)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1184)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1183)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1212)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1236)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
However, after starting the cluster with start-all.sh, jps shows that ResourceManager and NameNode are running on the master and that NodeManager and DataNode are running on the slaves, and the startup logs contain no errors. Yet verifying map/reduce produces the error above. Any pointers would be greatly appreciated.
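The exception message itself points at the most likely cause: on Hadoop 2.x the job client only submits to YARN when mapreduce.framework.name is set accordingly; otherwise Cluster.initialize() finds no usable framework and fails exactly like this. Below is a minimal sketch of the relevant mapred-site.xml entry (placed in the cluster's Hadoop conf directory, e.g. under /usr/local/hadoop/hadoop-2.0.2-alpha); the property name comes straight from the error message, everything else about the setup is an assumption.

<!-- mapred-site.xml: tell the MapReduce client to submit jobs to YARN
     instead of the classic/local framework it cannot initialize -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

If this property is already set to yarn, check that the same configuration files are present on the node where the job is submitted and that the client can reach the ResourceManager address configured in yarn-site.xml; restart the daemons after changing either file.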
An internal error occurred during: "Map/Reduce location status updater".
java.lang.NullPointerException
    at org.apache.hadoop.mapred.JobClient.getAllJobs(JobClient.java:814)
    at org.apache.hadoop.mapred.JobClient.jobsToComplete(JobClient.java:790)
    at org.apache.hadoop.eclipse.server.HadoopServer$LocationStatusUpdater.run(HadoopServer.java:119)
    at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
java.io.IOException: failure to login
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:796)
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:748)
    at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:621)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:81)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
    at org.apache.hadoop.mapred.JobClient.init(JobClient.java:470)
    at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:449)
    at org.apache.hadoop.eclipse.server.HadoopServer.getJobClient(HadoopServer.java:488)
    at org.apache.hadoop.eclipse.server.HadoopServer$LocationStatusUpdater.run(HadoopServer.java:103)
    at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
Caused by: javax.security.auth.login.LoginException: No LoginModule found for com.sun.security.auth.module.NTLoginModule
    at java.base/javax.security.auth.login.LoginContext.invoke(LoginContext.java:731)
    at java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:672)
    at java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:670)
    at java.base/java.security.AccessController.doPrivileged(AccessController.java:688)
    at java.base/javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:670)
    at java.base/javax.security.auth.login.LoginContext.login(LoginContext.java:581)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:771)
    ... 9 more
[root@master ~]# hadoop jar "/root/software/hadoop-3.1.3/share/hadoop/tools/lib/hadoop-streaming-3.1.3.jar" \
    -file "/usr/bin/python3" "/root/csv_python_code/my_mapper_csv.py" -mapper "/root/csv_python_code/my_mapper_csv.py" \
    -file /usr/bin/python3 "/root/csv_python_code/my_reducer_csv.py" -reducer "/root/csv_python_code/my_reducer_csv.py" \
    -input /my_input_csv/* -output /my_output.csv
2025-06-11 18:58:58,183 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead.
packageJobJar: [/usr/bin/python3, /root/csv_python_code/my_mapper_csv.py, /usr/bin/python3, /root/csv_python_code/my_reducer_csv.py, /tmp/hadoop-unjar1612883193461920012/] [] /tmp/streamjob7378925739115730847.jar tmpDir=null
2025-06-11 18:58:59,167 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.30.110:8032
2025-06-11 18:58:59,389 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.30.110:8032
2025-06-11 18:58:59,779 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1744881937486_0004
2025-06-11 18:58:59,893 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2025-06-11 18:59:00,007 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2025-06-11 18:59:00,435 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2025-06-11 18:59:00,466 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2025-06-11 18:59:00,908 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2025-06-11 18:59:01,415 INFO mapred.FileInputFormat: Total input files to process : 1
2025-06-11 18:59:01,456 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2025-06-11 18:59:01,909 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2025-06-11 18:59:01,927 INFO mapreduce.JobSubmitter: number of splits:2
2025-06-11 18:59:02,066 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2025-06-11 18:59:02,114 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1744881937486_0004
2025-06-11 18:59:02,114 INFO mapreduce.JobSubmitter: Executing with tokens: []
2025-06-11 18:59:02,343 INFO conf.Configuration: resource-types.xml not found
2025-06-11 18:59:02,344 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2025-06-11 18:59:02,437 INFO impl.YarnClientImpl: Submitted application application_1744881937486_0004
2025-06-11 18:59:02,489 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1744881937486_0004/
2025-06-11 18:59:02,491 INFO mapreduce.Job: Running job: job_1744881937486_0004
2025-06-11 18:59:09,693 INFO mapreduce.Job: Job job_1744881937486_0004 running in uber mode : false
2025-06-11 18:59:09,696 INFO mapreduce.Job: map 0% reduce 0%
2025-06-11 18:59:13,846 INFO mapreduce.Job: Task Id : attempt_1744881937486_0004_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:325)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:538)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
2025-06-11 18:59:14,874 INFO mapreduce.Job: map 50% reduce 0%
2025-06-11 18:59:18,949 INFO mapreduce.Job: Task Id : attempt_1744881937486_0004_m_000000_1, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:325)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:538)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
2025-06-11 18:59:21,984 INFO mapreduce.Job: Task Id : attempt_1744881937486_0004_m_000000_2, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:325)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:538)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
2025-06-11 18:59:24,034 INFO mapreduce.Job: Task Id : attempt_1744881937486_0004_r_000000_0, Status : FAILED
[2025-06-11 18:59:22.157]Container [pid=72527,containerID=container_1744881937486_0004_01_000005] is running 537983488B beyond the 'VIRTUAL' memory limit. Current usage: 168.3 MB of 1 GB physical memory used; 2.6 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1744881937486_0004_01_000005 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 72527 72525 72527 72527 (bash) 0 1 116002816 303 /bin/bash -c /usr/local/jdk1.8.0_391/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1744881937486_0004/container_1744881937486_0004_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/software/hadoop-3.1.3/logs/userlogs/application_1744881937486_0004/container_1744881937486_0004_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Dyarn.app.mapreduce.shuffle.logger=INFO,shuffleCLA -Dyarn.app.mapreduce.shuffle.logfile=syslog.shuffle -Dyarn.app.mapreduce.shuffle.log.filesize=0 -Dyarn.app.mapreduce.shuffle.log.backups=0 org.apache.hadoop.mapred.YarnChild 192.168.30.110 45262 attempt_1744881937486_0004_r_000000_0 5 1>/root/software/hadoop-3.1.3/logs/userlogs/application_1744881937486_0004/container_1744881937486_0004_01_000005/stdout 2>/root/software/hadoop-3.1.3/logs/userlogs/application_1744881937486_0004/container_1744881937486_0004_01_000005/stderr
    |- 72539 72527 72527 72527 (java) 432 50 2676838400 42782 /usr/local/jdk1.8.0_391/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1744881937486_0004/container_1744881937486_0004_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/software/hadoop-3.1.3/logs/userlogs/application_1744881937486_0004/container_1744881937486_0004_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Dyarn.app.mapreduce.shuffle.logger=INFO,shuffleCLA -Dyarn.app.mapreduce.shuffle.logfile=syslog.shuffle -Dyarn.app.mapreduce.shuffle.log.filesize=0 -Dyarn.app.mapreduce.shuffle.log.backups=0 org.apache.hadoop.mapred.YarnChild 192.168.30.110 45262 attempt_1744881937486_0004_r_000000_0 5
[2025-06-11 18:59:22.166]Container killed on request. Exit code is 143
[2025-06-11 18:59:22.179]Container exited with a non-zero exit code 143.
2025-06-11 18:59:28,176 INFO mapreduce.Job: map 100% reduce 100%
2025-06-11 18:59:29,197 INFO mapreduce.Job: Job job_1744881937486_0004 failed with state FAILED due to: Task failed task_1744881937486_0004_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0 killedMaps:0 killedReduces: 0
2025-06-11 18:59:29,306 INFO mapreduce.Job: Counters: 43
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=352126
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=1771275
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=3
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=0
    Job Counters
        Failed map tasks=4
        Failed reduce tasks=1
        Killed map tasks=1
        Killed reduce tasks=1
        Launched map tasks=6
        Launched reduce tasks=1
        Other local map tasks=4
        Data-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=16751
        Total time spent by all reduces in occupied slots (ms)=7052
        Total time spent by all map tasks (ms)=16751
        Total time spent by all reduce tasks (ms)=7052
        Total vcore-milliseconds taken by all map tasks=16751
        Total vcore-milliseconds taken by all reduce tasks=7052
        Total megabyte-milliseconds taken by all map tasks=17153024
        Total megabyte-milliseconds taken by all reduce tasks=7221248
    Map-Reduce Framework
        Map input records=10009
        Map output records=10009
        Map output bytes=110099
        Map output materialized bytes=130123
        Input split bytes=92
        Combine input records=0
        Spilled Records=10009
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=160
        CPU time spent (ms)=1030
        Physical memory (bytes) snapshot=289234944
        Virtual memory (bytes) snapshot=2799288320
        Total committed heap usage (bytes)=209715200
        Peak Map Physical memory (bytes)=289234944
        Peak Map Virtual memory (bytes)=2799288320
    File Input Format Counters
        Bytes Read=1771183
2025-06-11 18:59:29,308 ERROR streaming.StreamJob: Job not successful!
Streaming Command Failed!
[root@master ~]#
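This log shows two separate problems. The map attempts fail with "PipeMapRed.waitOutputThreads(): subprocess failed with code 1", meaning the Python mapper itself exited non-zero; common causes are a missing #!/usr/bin/env python3 shebang line, a script without the execute bit, or an exception while parsing the CSV input, and piping a few input lines through my_mapper_csv.py and my_reducer_csv.py locally usually reveals which one it is. The reduce container was killed for a different reason: it used 2.6 GB of virtual memory against the default limit of 2.1 GB (the 1 GB container multiplied by the default yarn.nodemanager.vmem-pmem-ratio of 2.1). For that part, here is a hedged yarn-site.xml sketch; the property names are standard YARN keys, the values are assumptions to adapt to the nodes:

<!-- yarn-site.xml: relax the virtual-memory check that killed the reduce container -->
<configuration>
  <!-- Option 1: disable the virtual-memory check (common on small dev clusters) -->
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
  <!-- Option 2 (alternative to option 1): allow more virtual memory per MB of physical memory -->
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
  </property>
</configuration>

Restart the NodeManagers after changing either property. It is also worth switching the deprecated -file options to the generic -files option, as the first WARN line in the log suggests.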