An Introduction to the Task Controller in Hadoop 1.x

This article takes a close look at Hadoop's TaskController component, compares the security characteristics of DefaultTaskController and LinuxTaskController, and uses a concrete example to show how the two behave differently while executing tasks.

There is already a fair amount of material about Hadoop's TaskController online, but most of it stays at the purely conceptual level, which can make the topic feel rather abstract. This article therefore collects my own notes on the Task Controller, together with an example designed specifically for it, in the hope that it will be of some help to readers.

This article covers the Task Controller in the following order: its role, its configuration, and how it works (working from the outside in, from the simple to the detailed). All code discussed here is based on Apache Hadoop 1.1.1.

Role

The class-level Javadoc of org.apache.hadoop.mapred.TaskController describes its role as: "Controls initialization, finalization and clean up of tasks, and also the launching and killing of task JVMs". In other words, the Task Controller controls how a task is initialized, terminated and cleaned up, and it also controls starting and killing the JVMs that run the task. Put simply, it decides exactly how a task is executed, which is easy enough to grasp. TaskController.java is only an abstract class; it has two concrete implementations, DefaultTaskController and LinuxTaskController.
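For orientation, here is a minimal sketch of the shape of the abstract class. The method names are the ones discussed in the "How it works" section below; the class name and the parameter lists are simplified placeholders for illustration and do not reproduce the actual Hadoop 1.1.1 signatures.

import java.io.IOException;

// Simplified sketch of the TaskController contract; not the actual Hadoop source.
public abstract class TaskControllerSketch {

    // One-time setup of the local directories and resources the controller needs.
    public abstract void setup() throws IOException;

    // Prepare a job's files on this node before its first task runs.
    public abstract void initializeJob(String user, String jobId) throws IOException;

    // Launch the child JVM that runs a task attempt and return its exit code.
    public abstract int launchTask(String user, String jobId, String attemptId) throws IOException;

    // Deliver a signal (for example SIGTERM or SIGKILL) to a running task process.
    public abstract void signalTask(String user, int taskPid, String signal) throws IOException;

    // Create the log directory of a task attempt with the right ownership.
    public abstract void createLogDir(String attemptId) throws IOException;

    // Delete a path under the task tracker's local directories as the given user.
    public abstract void deleteAsUser(String user, String subDir) throws IOException;
}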


What is the difference between DefaultTaskController and LinuxTaskController? Their main difference comes down to security, namely which user's identity is used for operations on the OS file system (note: this is about the local OS, not HDFS):

- DefaultTaskController: no matter which user submits a Job, every task runs as the owner of the task tracker daemon. The Job's owner is still the user who submitted it, and the Job's operations on HDFS are still performed as the submitter. However, every OS-level operation inside a task runs as the owner of the task tracker daemon, and that account usually holds fairly high privileges on the OS. This creates a significant security risk: a user who submits a Job can use the privileges of the task tracker daemon's owner to do serious damage to the OS.

- LinuxTaskController: no matter which user submits a Job, every task runs as the user who submitted it. As with DefaultTaskController, the Job's owner is the submitter and the Job's operations on HDFS are performed as the submitter. The difference is that all OS-level operations inside a task also run as the submitter, which closes the security hole left open by DefaultTaskController.
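Before moving on to the file-copy experiment, note that there is also a quick way to observe this difference directly: log the OS-level user of the task JVM from inside a mapper, since System.getProperty("user.name") returns the OS account the JVM process runs as. The mapper below is a hypothetical helper written only for this illustration:

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical helper: emits the OS user the task JVM runs as, so the effect of
// the configured TaskController can be read straight from the job output.
public class WhoAmIMapper extends Mapper<Object, Text, Text, LongWritable> {

    @Override
    protected void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // With DefaultTaskController this is the task tracker daemon's owner;
        // with LinuxTaskController it is the user who submitted the Job.
        String osUser = System.getProperty("user.name");
        context.write(new Text("task JVM runs as OS user: " + osUser), new LongWritable(1));
    }
}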

 

Next, let's verify these claims with a concrete example.

First, design a simple MapReduce Job: it specifies no Reducer and only a single Mapper named FOMapper. FOMapper copies a file from HDFS into the /home/tom directory on the local OS, and /home/tom is owned by the user tom, so users other than tom (and other than privileged users) cannot create files in it. On the test Hadoop cluster, the tasktracker daemon is owned by the user tom.

 

The code of this MapReduce Job is as follows:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TaskControllerTest {

    public static void copyFromHDFSToLocal(Configuration conf)
            throws IOException {
        FileSystem hdfs = FileSystem.get(conf);
        Path srcPath = new Path("/user/tom/test.txt");
        Path dstPath = new Path("/home/tom/test.txt");
        hdfs.copyToLocalFile(srcPath, dstPath);
    }

    public static class FOMapper extends
            Mapper<Object, Text, Text, LongWritable> {

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            copyFromHDFSToLocal(context.getConfiguration());
            context.write(value, new LongWritable(1));
        }

    }

    /**
     * @param args
     * @throws IOException
     * @throws ClassNotFoundException
     * @throws InterruptedException
     */
    public static void main(String[] args) throws IOException,
            InterruptedException, ClassNotFoundException {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "File Operations Job");

        job.setMapperClass(FOMapper.class);
        job.setJarByClass(FOMapper.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

}

 

Package this MapReduce Job into a jar, then run it in each of the following four scenarios and check the results (readers can repeat similar tests locally; a command-line sketch for carrying out the checks follows scenario d):

a) The Hadoop cluster is configured to use DefaultTaskController (see the "Configuration" section below for the details), and the Job is submitted as user tom.

Result: the Job is owned by tom and completes successfully. The HDFS file is copied into /home/tom on the OS, and the OS-level owner of the copied file is tom.

 

b) The Hadoop cluster is configured to use DefaultTaskController, and the Job is submitted as user jerry (a non-privileged user other than tom).

Result: the Job is owned by jerry and completes successfully. The HDFS file is copied into /home/tom on the OS, but the OS-level owner of the copied file is not jerry; it is tom.

 

c) The Hadoop cluster is configured to use LinuxTaskController (see the "Configuration" section below for the details), and the Job is submitted as user tom.

Result: the Job is owned by tom and completes successfully. The HDFS file is copied into /home/tom on the OS, and the OS-level owner of the copied file is tom.

 

d) The Hadoop cluster is configured to use LinuxTaskController, and the Job is submitted as user jerry (a non-privileged user other than tom).

Result: the Job is owned by jerry, but it fails with a "Permission denied" exception. This outcome is expected and correct: with LinuxTaskController, every OS-level file operation runs as the Job's submitter, so when the submitter is jerry, the task hits "Permission denied" the moment it tries to write into /home/tom as jerry. This is exactly how the security risk carried by DefaultTaskController is avoided.
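As a rough command-line sketch of how these checks can be carried out (the jar name and the input/output paths are placeholders for whatever was used when packaging the job above; run the ls command on the node that executed the map task):

# Submit the job as jerry (scenarios b and d); use tom instead for scenarios a and c.
su - jerry -c "hadoop jar taskcontroller-test.jar TaskControllerTest /user/jerry/input /user/jerry/output"

# Check which OS user owns the file that the mapper copied onto the local file system.
ls -l /home/tom/test.txt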

 

The experiments above show that LinuxTaskController provides stronger security than DefaultTaskController, which is why LinuxTaskController is the recommended choice for a Hadoop cluster.

 

Configuration

A. Using DefaultTaskController:

No configuration is required, because DefaultTaskController is the TaskController Hadoop uses by default.

 

B. Using LinuxTaskController:

a) task-controller:

This is a setuid executable binary that Hadoop uses to run tasks as the user who submitted the Job. It ships with Hadoop and lives by default at ${HADOOP_HOME}/bin/task-controller. It must be owned by root, and its group must be the group named by the mapreduce.tasktracker.group parameter (which is set both in mapred-site.xml and in task-controller.cfg). Its permissions are typically 4754, i.e. -rwsr-xr--.
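A minimal sketch of preparing the binary, assuming the group tom used in the example above (substitute your own mapreduce.tasktracker.group value and HADOOP_HOME):

chown root:tom ${HADOOP_HOME}/bin/task-controller
chmod 4754 ${HADOOP_HOME}/bin/task-controller
ls -l ${HADOOP_HOME}/bin/task-controller    # should now show -rwsr-xr-- root tom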

 

b) mapred-site.xml:

Add the following to mapred-site.xml:

  <property>
    <name>mapred.task.tracker.task-controller</name>
    <value>org.apache.hadoop.mapred.LinuxTaskController</value>
  </property>
  <property>
    <name>mapreduce.tasktracker.group</name>
    <value>tom</value>
  </property>

Notes:

- mapred.task.tracker.task-controller: selects which Task Controller implementation Hadoop uses. The default is org.apache.hadoop.mapred.DefaultTaskController; to use LinuxTaskController, set it to org.apache.hadoop.mapred.LinuxTaskController.

- mapreduce.tasktracker.group: the group that the task-controller file must belong to. If the group of task-controller does not match this setting, the task tracker will fail at startup with an error such as "[ERROR] tasktracker failed to start".

 

c) task-controller.cfg:

This file holds the task controller's own configuration. Its default location is ${HADOOP_HOME}/conf/task-controller.cfg, but the location can also be set explicitly through the HADOOP_SECURITY_CONF_DIR environment variable, e.g. "export HADOOP_SECURITY_CONF_DIR=/var/task-controller-conf". The task-controller.cfg file should have permissions 400. Its contents typically look like this:

mapred.tasktracker.tasks.sleeptime-before-sigkill=#sleep time before sigkill is to be sent to the process group after sigterm is sent. Should be in seconds

hadoop.log.dir=/var/hadoop/logs

mapred.local.dir=/hadoop/mapred/local

mapreduce.tasktracker.group=hadoop

min.user.id=100

banned.users=foo,bar

Notes:

- min.user.id: blocks users whose UID is below this value (typically privileged system accounts) from running tasks.

- banned.users: blocks the listed users from running tasks.

In addition, Hadoop requires that every directory on the HADOOP_SECURITY_CONF_DIR path be writable only by root. Typically each directory on that path is owned by root, its group matches the TaskTracker's group, and its permissions are set to 755.
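As a sketch of preparing such a layout, again assuming the /var/task-controller-conf location from the export example above and the group tom used earlier (both stand in for your own values):

export HADOOP_SECURITY_CONF_DIR=/var/task-controller-conf
mkdir -p "$HADOOP_SECURITY_CONF_DIR"
chown root:tom "$HADOOP_SECURITY_CONF_DIR"     # owned by root, group matching the TaskTracker's group
chmod 755 "$HADOOP_SECURITY_CONF_DIR"          # writable only by root
cp /path/to/task-controller.cfg "$HADOOP_SECURITY_CONF_DIR/"     # source path is a placeholder
chown root "$HADOOP_SECURITY_CONF_DIR/task-controller.cfg"
chmod 400 "$HADOOP_SECURITY_CONF_DIR/task-controller.cfg"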

 

How it works

Now that we know what the Task Controller does, understanding how it is implemented becomes much easier. This article will not exhaustively list the code and the call chains between classes (those are easy to trace in the source); instead, here are a few points the author considers worth highlighting:

A. Where does the Hadoop framework call the Task Controller?

Task Controller methods are invoked from many of the places involved in the different stages of a task's execution, for example TaskTracker.java, TaskRunner.java, JvmManager.java and UserLogManager.java.

 

B. What is the main implementation difference between DefaultTaskController.java and LinuxTaskController.java?

The abstract class TaskController.java declares a number of abstract methods, chiefly setup(), initializeJob(), launchTask(), signalTask(), createLogDir() and deleteAsUser(). DefaultTaskController.java and LinuxTaskController.java each implement these methods in their own way, and the main difference is:

- In DefaultTaskController.java, the work inside these methods is done directly through the Hadoop FileSystem API or the Java IO API; in other words, DefaultTaskController implements the behaviour itself. Take deleteAsUser() as an example:

  public void deleteAsUser(String user,
                           String subDir) throws IOException {
    String dir = TaskTracker.getUserDir(user) + Path.SEPARATOR + subDir;
    for (Path fullDir : allocator.getAllLocalPathsToRead(dir, getConf())) {
      fs.delete(fullDir, true);
    }
  }

- In LinuxTaskController.java, the work is done mainly by invoking the task-controller executable binary. Again taking the delete-as-user family of operations as the example (the snippet below shows deleteLogAsUser(), which delegates the deletion to the binary):

  // Path to the setuid executable.
  private String taskControllerExe;
  private static final String TASK_CONTROLLER_EXEC_KEY =
    "mapreduce.tasktracker.task-controller.exe";

  @Override
  public void setConf(Configuration conf) {
    super.setConf(conf);
    File hadoopBin = new File(System.getenv("HADOOP_HOME"), "bin");
    String defaultTaskController =
        new File(hadoopBin, "task-controller").getAbsolutePath();
    taskControllerExe = conf.get(TASK_CONTROLLER_EXEC_KEY,
                                 defaultTaskController);
  }

  ... ...

  @Override
  public void deleteLogAsUser(String user, String subDir) throws IOException {
    String[] command =
      new String[]{taskControllerExe,
                   user,
                   localStorage.getDirsString(),
                   Integer.toString(Commands.DELETE_LOG_AS_USER.getValue()),
                   subDir};
    ShellCommandExecutor shExec = new ShellCommandExecutor(command);
    if (LOG.isDebugEnabled()) {
      LOG.debug("deleteLogAsUser: " + Arrays.toString(command));
    }
    shExec.execute();
  }

 

C. How is the task-controller binary built?

task-controller is built from the C sources that live under src/c++/task-controller in the Hadoop source tree.

The core of task-controller is implemented in a handful of files in that directory, and interested readers are encouraged to browse them. Two of the most important files deserve a brief introduction:

- main.c: the entry point of task-controller. It accepts and validates the command-line arguments and then calls the functions implemented in task-controller.c.

- task-controller.c: the concrete implementation of the functions task-controller needs; it defines a large number of them.

It is worth mentioning that users can extend task-controller as needed by modifying or enriching task-controller.c, task-controller.h, main.c and the other source files. The author once solved a Hadoop issue by rewriting task-controller and contributed the change back to the Apache Hadoop community (https://issues.apache.org/jira/browse/MAPREDUCE-4490).

 

 
