Pitfalls When Getting Started with Hadoop


Over the past few days I started learning Hadoop, and while testing a few demos I ran into many problems; some took a long time to figure out, so I am writing this post about the pitfalls. In learning to program, solving problems is the core of what we do, so when we hit a hard one we should be excited to crack it. From now on I want to write more posts and record the bits and pieces of my programming journey; I hope we can all encourage each other. Now, to the main topic.

!!! When something goes wrong in Hadoop, always check the corresponding logs !!!

I. Environment

1. A server (not a virtual machine) running Linux (CentOS 7). The /etc/hosts configuration is shown below.

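Reconstructed from the hostnames that appear later in this post (your IPs and hostnames will of course differ), the relevant entries were roughly:

127.0.0.1        localhost xiancheng
0.0.0.0          xiancheng2
123.207.254.33   xiancheng3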

2. JDK 1.7.0_80

3. hadoop-2.5.0

II. Problems Encountered

1. The NameNode fails to start
The NameNode would not start. Opening the log revealed the following error:
2018-01-07 00:37:13,992 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /opt/hadoop-2.5.0/data/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:311)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:955)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:529)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)

Cause: the log already says it clearly: a storage directory does not exist or is not accessible. In my case, after formatting the NameNode a second time I ran into the DataNode startup failure described below (problem 2), so I deleted everything under the data directory, including the NameNode's own data, which then produced this error.
Solution: re-format the NameNode.
Note: when re-formatting the NameNode, delete the contents of the data directory first and then format. That avoids the DataNode startup failure (problem 2 below), whose fix can otherwise lead right back to this error (see the sketch below).
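A minimal sketch of that sequence, assuming hadoop.tmp.dir points at /opt/hadoop-2.5.0/data/tmp as in my logs, run from the Hadoop install directory:

# stop HDFS before touching any on-disk state
sbin/stop-dfs.sh
# clear out the old NameNode and DataNode data first
rm -rf /opt/hadoop-2.5.0/data/tmp/dfs/*
# re-format the NameNode, then bring HDFS back up
bin/hdfs namenode -format
sbin/start-dfs.sh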
2. The DataNode fails to start
The DataNode would not start. Opening the log revealed the following error:
2018-01-06 23:59:12,597 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:8020. Exiting. 
java.io.IOException: Incompatible clusterIDs in /opt/hadoop-2.5.0/data/tmp/dfs/data: namenode clusterID = CID-e4f699ab-9239-46b5-b03d-7ca00eca68c8; datanode clusterID = CID-586a76e9-b54a-4108-aa6a-b4002789a837
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:477)
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:226)
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:254)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:975)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:946)
	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:812)
	at java.lang.Thread.run(Thread.java:745)

2018-01-06 23:59:12,598 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:8020

Cause: the NameNode had been formatted once before, and was then formatted again without clearing the data under the DataNode's directory. The newly formatted NameNode's clusterID no longer matches the one the DataNode recorded, so the DataNode fails to start.
Solution: (1) delete the contents of the directory where the data is kept; by default that is tmp/dfs/data.
(2) Alternatively, reconfigure the data directory in core-site.xml (see the sketch below).
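For reference, that default location is derived from hadoop.tmp.dir; a minimal sketch of the corresponding core-site.xml entry, using the path that appears in my logs:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-2.5.0/data/tmp</value>
</property>

The DataNode then keeps its blocks under ${hadoop.tmp.dir}/dfs/data, which is the directory whose contents solution (1) deletes.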

3. The page on port 8088 won't open

The page on port 8088 would not open, yet MapReduce jobs still ran fine. Strange, isn't it?
Cause: the yarn.resourcemanager.hostname property in yarn-site.xml. The configuration that failed to open looked like the following:
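A sketch of what the failing yarn-site.xml entry amounted to, with the hostname pointed at localhost:

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>localhost</value>
</property>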

localhost resolves to 127.0.0.1, so the page would not open in a browser either as localhost:8088 or as <host IP>:8088 (my host IP is 123.207.254.33, which the ifconfig command reports on Linux).
Solution: change the configuration to either of the variants sketched below:
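A sketch of the two working variants (xiancheng3 and xiancheng2 are hostnames from my /etc/hosts; substitute your own):

<!-- variant 1: a hostname that resolves to the machine's real IP (123.207.254.33) -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>xiancheng3</value>
</property>

<!-- variant 2: a hostname that resolves to 0.0.0.0, so the server binds on all interfaces -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>xiancheng2</value>
</property>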

Here the hostname xiancheng3 resolves to the machine's own IP address (123.207.254.33), while the hostname xiancheng2 resolves to 0.0.0.0.

4. The historyserver page on port 19888 won't open

Cause: the mapreduce.jobhistory.webapp.address property in mapred-site.xml was misconfigured.

Solution: reconfigure it to either of the variants sketched below:
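A sketch of the two variants (hostnames from my /etc/hosts again; 19888 is the history server's default web UI port):

<!-- variant 1: the machine's own hostname/IP -->
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>xiancheng3:19888</value>
</property>

<!-- variant 2: xiancheng2, i.e. 0.0.0.0, which matches Hadoop's default -->
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>xiancheng2:19888</value>
</property>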

xiancheng2 (0.0.0.0) is effectively Hadoop's default address. Even so, using your own IP address is the better choice.

5. Running a MapReduce job errors out once the historyserver is started

This problem had me thinking for a long time. With the historyserver off, the wordcount job ran successfully, but the moment I started the historyserver it failed, even though my historyserver configuration was correct, which made it all very strange. Below is the console output:


As you can see, nothing useful turns up there. Let me say it once more: when something goes wrong, you must check the logs; only the logs hold the detailed information.
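A quick sketch of where to look, assuming the default layout with one log file per daemon under the install's logs directory:

# each daemon writes its own log file under $HADOOP_HOME/logs
ls /opt/hadoop-2.5.0/logs
tail -n 100 /opt/hadoop-2.5.0/logs/yarn-*-resourcemanager-*.log
tail -n 100 /opt/hadoop-2.5.0/logs/mapred-*-historyserver-*.log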
Checking the ResourceManager log, I found nothing wrong:
2018-01-07 02:19:05,166 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting ResourceManager
STARTUP_MSG:   host = xiancheng/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.5.0
STARTUP_MSG:   classpath = /opt/hadoop-2.5.0/etc/hadoop:/opt/hadoop-2.5.0/etc/hadoop:/opt/hadoop-2.5.0/etc/hadoop:/opt/hadoop-2.5.0/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/zookeeper-3.4.6.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/stax-api-1.0-2.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/httpclient-4.2.5.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/junit-4.11.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/hamcrest-core-1.3.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/avro-1.7.4.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/hadoop-annotations-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/asm-3.2.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/hadoop-auth-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jsch-0.1.42.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/activation-1.1.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jets3t-0.9.0.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/commons-el-1.0.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/httpcore-4.2.5.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/commons-io-2.4.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib
/commons-net-3.1.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/hadoop-2.5.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-2.5.0/share/hadoop/common/hadoop-common-2.5.0-tests.jar:/opt/hadoop-2.5.0/share/hadoop/common/hadoop-common-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/common/hadoop-nfs-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/commons-io-2.4.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/hadoop-hdfs-2.5.0-tests.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/hdfs/hadoop-hdfs-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/servlet-api-2.5.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jettison-1.1.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/asm-3.2.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jersey-json-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jline-0.9.94.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/
opt/hadoop-2.5.0/share/hadoop/yarn/lib/xz-1.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/commons-cli-1.2.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/activation-1.1.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jersey-client-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/commons-io-2.4.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/guice-3.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/commons-codec-1.4.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jetty-6.1.26.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/guava-11.0.2.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-common-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-client-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-api-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-server-common-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.5.0-tests.jar:/opt/hadoop-2.5.0/share/had
oop/mapreduce/hadoop-mapreduce-client-shuffle-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.5.0.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-common-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-client-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-api-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-server-common-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/servlet-api-2.5.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jettison-1.1.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/asm-3.2.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jersey-json-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jline-0.9.94.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/xz-1.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/commons-cli-1.2.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/activation-1.1.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jersey-client-1.9.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/commons-io-2.4.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/guice-3.0.jar:/opt/hadoop-2.
5.0/share/hadoop/yarn/lib/commons-codec-1.4.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jetty-6.1.26.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/guava-11.0.2.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/opt/hadoop-2.5.0/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-2.5.0/etc/hadoop/rm-config/log4j.properties
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1616291; compiled by 'jenkins' on 2014-08-06T17:31Z
STARTUP_MSG:   java = 1.7.0_80
************************************************************/
2018-01-07 02:19:05,184 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: registered UNIX signal handlers for [TERM, HUP, INT]
2018-01-07 02:19:05,526 INFO org.apache.hadoop.conf.Configuration: found resource core-site.xml at file:/opt/hadoop-2.5.0/etc/hadoop/core-site.xml
2018-01-07 02:19:05,816 INFO org.apache.hadoop.security.Groups: clearing userToGroupsMap cache
2018-01-07 02:19:05,825 INFO org.apache.hadoop.conf.Configuration: found resource yarn-site.xml at file:/opt/hadoop-2.5.0/etc/hadoop/yarn-site.xml
2018-01-07 02:19:05,981 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.RMFatalEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMFatalEventDispatcher
2018-01-07 02:19:06,461 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: NMTokenKeyRollingInterval: 86400000ms and NMTokenKeyActivationDelay: 900000ms
2018-01-07 02:19:06,470 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager: ContainerTokenKeyRollingInterval: 86400000ms and ContainerTokenKeyActivationDelay: 900000ms
2018-01-07 02:19:06,474 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Rolling master-key for amrm-tokens
2018-01-07 02:19:06,512 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStoreEventType for class org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler
2018-01-07 02:19:06,854 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.NodesListManagerEventType for class org.apache.hadoop.yarn.server.resourcemanager.NodesListManager
2018-01-07 02:19:06,854 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Using Scheduler: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
2018-01-07 02:19:07,054 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher
2018-01-07 02:19:07,056 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher
2018-01-07 02:19:07,068 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher
2018-01-07 02:19:07,069 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$NodeEventDispatcher
2018-01-07 02:19:07,191 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2018-01-07 02:19:07,288 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2018-01-07 02:19:07,288 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ResourceManager metrics system started
2018-01-07 02:19:07,439 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.RMAppManagerEventType for class org.apache.hadoop.yarn.server.resourcemanager.RMAppManager
2018-01-07 02:19:07,448 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncherEventType for class org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher
2018-01-07 02:19:07,451 INFO org.apache.hadoop.yarn.server.resourcemanager.RMNMInfo: Registered RMNMInfo MBean
2018-01-07 02:19:07,454 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.ahs.WritingHistoryEventType for class org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler
2018-01-07 02:19:07,454 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.ahs.WritingHistoryEventType for class org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler
2018-01-07 02:19:07,454 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.ahs.WritingHistoryEventType for class org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler
2018-01-07 02:19:07,454 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.ahs.WritingHistoryEventType for class org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler
2018-01-07 02:19:07,454 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.ahs.WritingHistoryEventType for class org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler
2018-01-07 02:19:07,455 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.ahs.WritingHistoryEventType for class org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler
2018-01-07 02:19:07,455 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.ahs.WritingHistoryEventType for class org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler
2018-01-07 02:19:07,455 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.ahs.WritingHistoryEventType for class org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler
2018-01-07 02:19:07,455 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.ahs.WritingHistoryEventType for class org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler
2018-01-07 02:19:07,455 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.ahs.WritingHistoryEventType for class org.apache.hadoop.yarn.server.resourcemanager.ahs.RMApplicationHistoryWriter$ForwardingEventHandler
2018-01-07 02:19:07,456 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2018-01-07 02:19:07,477 INFO org.apache.hadoop.conf.Configuration: found resource capacity-scheduler.xml at file:/opt/hadoop-2.5.0/etc/hadoop/capacity-scheduler.xml
2018-01-07 02:19:07,653 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: root, capacity=1.0, asboluteCapacity=1.0, maxCapacity=1.0, asboluteMaxCapacity=1.0, state=RUNNING, acls=SUBMIT_APPLICATIONS:*ADMINISTER_QUEUE:*
2018-01-07 02:19:07,653 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Initialized parent-queue root name=root, fullname=root
2018-01-07 02:19:07,663 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Initializing default
capacity = 1.0 [= (float) configuredCapacity / 100 ]
asboluteCapacity = 1.0 [= parentAbsoluteCapacity * capacity ]
maxCapacity = 1.0 [= configuredMaxCapacity ]
absoluteMaxCapacity = 1.0 [= 1.0 maximumCapacity undefined, (parentAbsoluteMaxCapacity * maximumCapacity) / 100 otherwise ]
userLimit = 100 [= configuredUserLimit ]
userLimitFactor = 1.0 [= configuredUserLimitFactor ]
maxApplications = 10000 [= configuredMaximumSystemApplicationsPerQueue or (int)(configuredMaximumSystemApplications * absoluteCapacity)]
maxApplicationsPerUser = 10000 [= (int)(maxApplications * (userLimit / 100.0f) * userLimitFactor) ]
maxActiveApplications = 1 [= max((int)ceil((clusterResourceMemory / minimumAllocation) * maxAMResourcePerQueuePercent * absoluteMaxCapacity),1) ]
maxActiveAppsUsingAbsCap = 1 [= max((int)ceil((clusterResourceMemory / minimumAllocation) *maxAMResourcePercent * absoluteCapacity),1) ]
maxActiveApplicationsPerUser = 1 [= max((int)(maxActiveApplications * (userLimit / 100.0f) * userLimitFactor),1) ]
usedCapacity = 0.0 [= usedResourcesMemory / (clusterResourceMemory * absoluteCapacity)]
absoluteUsedCapacity = 0.0 [= usedResourcesMemory / clusterResourceMemory]
maxAMResourcePerQueuePercent = 0.1 [= configuredMaximumAMResourcePercent ]
minimumAllocationFactor = 0.875 [= (float)(maximumAllocationMemory - minimumAllocationMemory) / maximumAllocationMemory ]
numContainers = 0 [= currentNumContainers ]
state = RUNNING [= configuredState ]
acls = SUBMIT_APPLICATIONS:*ADMINISTER_QUEUE:* [= configuredAcls ]
nodeLocalityDelay = 40

2018-01-07 02:19:07,663 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized queue: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=0, numContainers=0
2018-01-07 02:19:07,663 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized queue: root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, numApps=0, numContainers=0
2018-01-07 02:19:07,663 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized root queue root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, numApps=0, numContainers=0
2018-01-07 02:19:07,664 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized CapacityScheduler with calculator=class org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, minimumAllocation=<<memory:1024, vCores:1>>, maximumAllocation=<<memory:8192, vCores:32>>, asynchronousScheduling=false, asyncScheduleInterval=5ms
2018-01-07 02:19:07,670 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning to active state
2018-01-07 02:19:07,707 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager: Rolling master-key for container-tokens
2018-01-07 02:19:07,709 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Rolling master-key for amrm-tokens
2018-01-07 02:19:07,721 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Rolling master-key for nm-tokens
2018-01-07 02:19:07,721 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2018-01-07 02:19:07,722 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: storing master key with keyID 1
2018-01-07 02:19:07,728 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2018-01-07 02:19:07,728 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2018-01-07 02:19:07,729 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: storing master key with keyID 2
2018-01-07 02:19:08,028 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2018-01-07 02:19:08,044 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8031
2018-01-07 02:19:08,068 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceTrackerPB to the server
2018-01-07 02:19:08,069 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-01-07 02:19:08,070 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8031: starting
2018-01-07 02:19:08,113 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2018-01-07 02:19:08,121 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8030
2018-01-07 02:19:08,135 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB to the server
2018-01-07 02:19:08,136 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-01-07 02:19:08,136 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8030: starting
2018-01-07 02:19:08,215 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2018-01-07 02:19:08,216 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8032
2018-01-07 02:19:08,221 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationClientProtocolPB to the server
2018-01-07 02:19:08,221 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-01-07 02:19:08,222 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8032: starting
2018-01-07 02:19:08,240 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to active state
2018-01-07 02:19:08,418 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2018-01-07 02:19:08,421 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.resourcemanager is not defined
2018-01-07 02:19:08,441 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2018-01-07 02:19:08,464 INFO org.apache.hadoop.http.HttpServer2: Added filter YARNAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context cluster
2018-01-07 02:19:08,464 INFO org.apache.hadoop.http.HttpServer2: Added filter YARNAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context static
2018-01-07 02:19:08,464 INFO org.apache.hadoop.http.HttpServer2: Added filter YARNAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context logs
2018-01-07 02:19:08,469 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context cluster
2018-01-07 02:19:08,469 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2018-01-07 02:19:08,469 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2018-01-07 02:19:08,478 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /cluster/*
2018-01-07 02:19:08,478 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2018-01-07 02:19:08,537 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 8088
2018-01-07 02:19:08,537 INFO org.mortbay.log: jetty-6.1.26
2018-01-07 02:19:08,595 INFO org.mortbay.log: Extract jar:file:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-common-2.5.0.jar!/webapps/cluster to /tmp/Jetty_xiancheng2_8088_cluster____.dmd23f/webapp
2018-01-07 02:19:09,094 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
2018-01-07 02:19:09,233 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
2018-01-07 02:19:09,234 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
2018-01-07 02:19:09,239 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@xiancheng2:8088
2018-01-07 02:19:09,239 INFO org.apache.hadoop.yarn.webapp.WebApps: Web app /cluster started at 8088
2018-01-07 02:19:09,845 INFO org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2018-01-07 02:19:09,875 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2018-01-07 02:19:09,878 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8033
2018-01-07 02:19:09,881 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceManagerAdministrationProtocolPB to the server
2018-01-07 02:19:09,886 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-01-07 02:19:09,886 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8033: starting
2018-01-07 02:20:30,797 INFO org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to /default-rack
2018-01-07 02:20:30,869 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: localhost:46399 Node Transitioned from NEW to RUNNING
2018-01-07 02:20:30,872 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added node localhost:46399 clusterResource: <memory:8192, vCores:8>
2018-01-07 02:20:30,874 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: NodeManager from node localhost(cmPort: 46399 httpPort: 8042) registered with capability: <memory:8192, vCores:8>, assigned nodeId localhost:46399
2018-01-07 02:20:52,767 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 1
2018-01-07 02:20:54,531 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 1 submitted by user root
2018-01-07 02:20:54,532 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1515262747671_0001
2018-01-07 02:20:54,534 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1515262747671_0001 State change from NEW to NEW_SAVING
2018-01-07 02:20:54,543 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1515262747671_0001
2018-01-07 02:20:54,544 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1515262747671_0001 State change from NEW_SAVING to SUBMITTED
2018-01-07 02:20:54,545 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1515262747671_0001 user: root leaf-queue of parent: root #applications: 1
2018-01-07 02:20:54,546 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1515262747671_0001 from user: root, in queue: default
2018-01-07 02:20:54,556 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1515262747671_0001 State change from SUBMITTED to ACCEPTED
2018-01-07 02:20:54,557 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1515262747671_0001_000001
2018-01-07 02:20:54,561 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1515262747671_0001_000001 State change from NEW to SUBMITTED
2018-01-07 02:20:54,562 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	IP=127.0.0.1	OPERATION=Submit Application Request	TARGET=ClientRMService	RESULT=SUCCESS	APPID=application_1515262747671_0001
2018-01-07 02:20:54,604 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1515262747671_0001 from user: root activated in queue: default
2018-01-07 02:20:54,605 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1515262747671_0001 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@55d9370a, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2018-01-07 02:20:54,605 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1515262747671_0001_000001 to scheduler from user root in queue default
2018-01-07 02:20:54,606 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1515262747671_0001_000001 State change from SUBMITTED to SCHEDULED
2018-01-07 02:20:55,131 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000001 Container Transitioned from NEW to ALLOCATED
2018-01-07 02:20:55,132 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000001
2018-01-07 02:20:55,132 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1515262747671_0001_01_000001 of capacity <memory:2048, vCores:1> on host localhost:46399, which has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available after allocation
2018-01-07 02:20:55,132 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1515262747671_0001_000001 container=Container: [ContainerId: container_1515262747671_0001_01_000001, NodeId: localhost:46399, NodeHttpAddress: localhost:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2018-01-07 02:20:55,132 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2018-01-07 02:20:55,132 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2018-01-07 02:20:55,135 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : localhost:46399 for container : container_1515262747671_0001_01_000001
2018-01-07 02:20:55,138 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2018-01-07 02:20:55,138 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1515262747671_0001_000001
2018-01-07 02:20:55,141 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1515262747671_0001 AttemptId: appattempt_1515262747671_0001_000001 MasterContainer: Container: [ContainerId: container_1515262747671_0001_01_000001, NodeId: localhost:46399, NodeHttpAddress: localhost:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 127.0.0.1:46399 }, ]
2018-01-07 02:20:55,144 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1515262747671_0001_000001 State change from SCHEDULED to ALLOCATED_SAVING
2018-01-07 02:20:55,153 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1515262747671_0001_000001 State change from ALLOCATED_SAVING to ALLOCATED
2018-01-07 02:20:55,159 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1515262747671_0001_000001
2018-01-07 02:20:55,207 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1515262747671_0001_01_000001, NodeId: localhost:46399, NodeHttpAddress: localhost:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 127.0.0.1:46399 }, ] for AM appattempt_1515262747671_0001_000001
2018-01-07 02:20:55,207 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1515262747671_0001_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA  -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr 
2018-01-07 02:20:55,551 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1515262747671_0001_01_000001, NodeId: localhost:46399, NodeHttpAddress: localhost:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 127.0.0.1:46399 }, ] for AM appattempt_1515262747671_0001_000001
2018-01-07 02:20:55,551 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1515262747671_0001_000001 State change from ALLOCATED to LAUNCHED
2018-01-07 02:20:56,152 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000001 Container Transitioned from ACQUIRED to RUNNING
2018-01-07 02:21:04,261 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1515262747671_0001_000001 (auth:SIMPLE)
2018-01-07 02:21:04,313 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM registration appattempt_1515262747671_0001_000001
2018-01-07 02:21:04,317 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	IP=127.0.0.1	OPERATION=Register App Master	TARGET=ApplicationMasterService	RESULT=SUCCESS	APPID=application_1515262747671_0001	APPATTEMPTID=appattempt_1515262747671_0001_000001
2018-01-07 02:21:04,322 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1515262747671_0001_000001 State change from LAUNCHED to RUNNING
2018-01-07 02:21:04,322 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1515262747671_0001 State change from ACCEPTED to RUNNING
2018-01-07 02:21:06,257 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000002 Container Transitioned from NEW to ALLOCATED
2018-01-07 02:21:06,257 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000002
2018-01-07 02:21:06,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1515262747671_0001_01_000002 of capacity <memory:1024, vCores:1> on host localhost:46399, which has 2 containers, <memory:3072, vCores:2> used and <memory:5120, vCores:6> available after allocation
2018-01-07 02:21:06,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1515262747671_0001_000001 container=Container: [ContainerId: container_1515262747671_0001_01_000002, NodeId: localhost:46399, NodeHttpAddress: localhost:8042, Resource: <memory:1024, vCores:1>, Priority: 20, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 clusterResource=<memory:8192, vCores:8>
2018-01-07 02:21:06,258 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=2
2018-01-07 02:21:06,258 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:3072, vCores:2> cluster=<memory:8192, vCores:8>
2018-01-07 02:21:06,578 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : localhost:46399 for container : container_1515262747671_0001_01_000002
2018-01-07 02:21:06,579 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000002 Container Transitioned from ALLOCATED to ACQUIRED
2018-01-07 02:21:07,270 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000002 Container Transitioned from ACQUIRED to RUNNING
2018-01-07 02:21:07,606 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: checking for deactivate... 
2018-01-07 02:21:11,321 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000002 Container Transitioned from RUNNING to COMPLETED
2018-01-07 02:21:11,321 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1515262747671_0001_01_000002 in state: COMPLETED event:FINISHED
2018-01-07 02:21:11,321 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000002
2018-01-07 02:21:11,321 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1515262747671_0001_01_000002 of capacity <memory:1024, vCores:1> on host localhost:46399, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available, release resources=true
2018-01-07 02:21:11,321 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:2048, vCores:1> numContainers=1 user=root user-resources=<memory:2048, vCores:1>
2018-01-07 02:21:11,322 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1515262747671_0001_01_000002, NodeId: localhost:46399, NodeHttpAddress: localhost:8042, Resource: <memory:1024, vCores:1>, Priority: 20, Token: Token { kind: ContainerToken, service: 127.0.0.1:46399 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 cluster=<memory:8192, vCores:8>
2018-01-07 02:21:11,322 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2018-01-07 02:21:11,322 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2018-01-07 02:21:11,322 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1515262747671_0001_000001 released container container_1515262747671_0001_01_000002 on node: host: localhost:46399 #containers=1 available=6144 used=2048 with event: FINISHED
2018-01-07 02:21:13,327 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000003 Container Transitioned from NEW to ALLOCATED
2018-01-07 02:21:13,327 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000003
2018-01-07 02:21:13,327 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1515262747671_0001_01_000003 of capacity <memory:1024, vCores:1> on host localhost:46399, which has 2 containers, <memory:3072, vCores:2> used and <memory:5120, vCores:6> available after allocation
2018-01-07 02:21:13,327 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1515262747671_0001_000001 container=Container: [ContainerId: container_1515262747671_0001_01_000003, NodeId: localhost:46399, NodeHttpAddress: localhost:8042, Resource: <memory:1024, vCores:1>, Priority: 5, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 clusterResource=<memory:8192, vCores:8>
2018-01-07 02:21:13,327 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=2
2018-01-07 02:21:13,327 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:3072, vCores:2> cluster=<memory:8192, vCores:8>
2018-01-07 02:21:13,650 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000003 Container Transitioned from ALLOCATED to ACQUIRED
2018-01-07 02:21:14,336 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000003 Container Transitioned from ACQUIRED to RUNNING
2018-01-07 02:21:14,654 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: checking for deactivate... 
2018-01-07 02:21:18,358 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000003 Container Transitioned from RUNNING to COMPLETED
2018-01-07 02:21:18,358 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1515262747671_0001_01_000003 in state: COMPLETED event:FINISHED
2018-01-07 02:21:18,358 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000003
2018-01-07 02:21:18,358 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1515262747671_0001_01_000003 of capacity <memory:1024, vCores:1> on host localhost:46399, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available, release resources=true
2018-01-07 02:21:18,358 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:2048, vCores:1> numContainers=1 user=root user-resources=<memory:2048, vCores:1>
2018-01-07 02:21:18,359 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1515262747671_0001_01_000003, NodeId: localhost:46399, NodeHttpAddress: localhost:8042, Resource: <memory:1024, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken, service: 127.0.0.1:46399 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 cluster=<memory:8192, vCores:8>
2018-01-07 02:21:18,360 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2018-01-07 02:21:18,360 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2018-01-07 02:21:18,360 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1515262747671_0001_000001 released container container_1515262747671_0001_01_000003 on node: host: localhost:46399 #containers=1 available=6144 used=2048 with event: FINISHED
2018-01-07 02:21:20,365 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000004 Container Transitioned from NEW to ALLOCATED
2018-01-07 02:21:20,365 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000004
2018-01-07 02:21:20,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1515262747671_0001_01_000004 of capacity <memory:1024, vCores:1> on host localhost:46399, which has 2 containers, <memory:3072, vCores:2> used and <memory:5120, vCores:6> available after allocation
2018-01-07 02:21:20,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1515262747671_0001_000001 container=Container: [ContainerId: container_1515262747671_0001_01_000004, NodeId: localhost:46399, NodeHttpAddress: localhost:8042, Resource: <memory:1024, vCores:1>, Priority: 5, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 clusterResource=<memory:8192, vCores:8>
2018-01-07 02:21:20,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=2
2018-01-07 02:21:20,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:3072, vCores:2> cluster=<memory:8192, vCores:8>
2018-01-07 02:21:20,700 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000004 Container Transitioned from ALLOCATED to ACQUIRED
2018-01-07 02:21:21,370 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000004 Container Transitioned from ACQUIRED to RUNNING
2018-01-07 02:21:21,710 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: checking for deactivate... 
2018-01-07 02:21:25,400 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000004 Container Transitioned from RUNNING to COMPLETED
2018-01-07 02:21:25,400 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1515262747671_0001_01_000004 in state: COMPLETED event:FINISHED
2018-01-07 02:21:25,400 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000004
2018-01-07 02:21:25,400 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1515262747671_0001_01_000004 of capacity <memory:1024, vCores:1> on host localhost:46399, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available, release resources=true
2018-01-07 02:21:25,400 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:2048, vCores:1> numContainers=1 user=root user-resources=<memory:2048, vCores:1>
2018-01-07 02:21:25,401 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1515262747671_0001_01_000004, NodeId: localhost:46399, NodeHttpAddress: localhost:8042, Resource: <memory:1024, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken, service: 127.0.0.1:46399 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 cluster=<memory:8192, vCores:8>
2018-01-07 02:21:25,401 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2018-01-07 02:21:25,401 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2018-01-07 02:21:25,402 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1515262747671_0001_000001 released container container_1515262747671_0001_01_000004 on node: host: localhost:46399 #containers=1 available=6144 used=2048 with event: FINISHED
2018-01-07 02:21:26,733 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: blacklist are updated in Scheduler.blacklistAdditions: [localhost], blacklistRemovals: []
2018-01-07 02:21:27,737 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: blacklist are updated in Scheduler.blacklistAdditions: [], blacklistRemovals: [localhost]
2018-01-07 02:21:28,408 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000005 Container Transitioned from NEW to ALLOCATED
2018-01-07 02:21:28,408 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000005
2018-01-07 02:21:28,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1515262747671_0001_01_000005 of capacity <memory:1024, vCores:1> on host localhost:46399, which has 2 containers, <memory:3072, vCores:2> used and <memory:5120, vCores:6> available after allocation
2018-01-07 02:21:28,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1515262747671_0001_000001 container=Container: [ContainerId: container_1515262747671_0001_01_000005, NodeId: localhost:46399, NodeHttpAddress: localhost:8042, Resource: <memory:1024, vCores:1>, Priority: 5, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 clusterResource=<memory:8192, vCores:8>
2018-01-07 02:21:28,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=2
2018-01-07 02:21:28,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:3072, vCores:2> cluster=<memory:8192, vCores:8>
2018-01-07 02:21:28,741 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000005 Container Transitioned from ALLOCATED to ACQUIRED
2018-01-07 02:21:29,413 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000005 Container Transitioned from ACQUIRED to RUNNING
2018-01-07 02:21:29,763 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: checking for deactivate... 
2018-01-07 02:21:33,456 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000005 Container Transitioned from RUNNING to COMPLETED
2018-01-07 02:21:33,456 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1515262747671_0001_01_000005 in state: COMPLETED event:FINISHED
2018-01-07 02:21:33,457 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000005
2018-01-07 02:21:33,457 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1515262747671_0001_01_000005 of capacity <memory:1024, vCores:1> on host localhost:46399, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available, release resources=true
2018-01-07 02:21:33,457 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:2048, vCores:1> numContainers=1 user=root user-resources=<memory:2048, vCores:1>
2018-01-07 02:21:33,457 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1515262747671_0001_01_000005, NodeId: localhost:46399, NodeHttpAddress: localhost:8042, Resource: <memory:1024, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken, service: 127.0.0.1:46399 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 cluster=<memory:8192, vCores:8>
2018-01-07 02:21:33,457 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2018-01-07 02:21:33,457 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2018-01-07 02:21:33,458 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1515262747671_0001_000001 released container container_1515262747671_0001_01_000005 on node: host: localhost:46399 #containers=1 available=6144 used=2048 with event: FINISHED
2018-01-07 02:21:34,126 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1515262747671_0001_000001 with final state: FINISHING, and exit status: -1000
2018-01-07 02:21:34,129 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1515262747671_0001_000001 State change from RUNNING to FINAL_SAVING
2018-01-07 02:21:34,129 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1515262747671_0001 with final state: FINISHING
2018-01-07 02:21:34,132 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1515262747671_0001 State change from RUNNING to FINAL_SAVING
2018-01-07 02:21:34,132 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1515262747671_0001
2018-01-07 02:21:34,133 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1515262747671_0001_000001 State change from FINAL_SAVING to FINISHING
2018-01-07 02:21:34,135 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1515262747671_0001 State change from FINAL_SAVING to FINISHING
2018-01-07 02:21:41,476 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1515262747671_0001_01_000001 Container Transitioned from RUNNING to COMPLETED
2018-01-07 02:21:41,476 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1515262747671_0001_01_000001 in state: COMPLETED event:FINISHED
2018-01-07 02:21:41,476 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000001
2018-01-07 02:21:41,476 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1515262747671_0001_01_000001 of capacity <memory:2048, vCores:1> on host localhost:46399, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2018-01-07 02:21:41,476 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=root user-resources=<memory:0, vCores:0>
2018-01-07 02:21:41,476 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1515262747671_0001_01_000001, NodeId: localhost:46399, NodeHttpAddress: localhost:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 127.0.0.1:46399 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2018-01-07 02:21:41,476 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2018-01-07 02:21:41,476 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2018-01-07 02:21:41,477 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1515262747671_0001_000001 released container container_1515262747671_0001_01_000001 on node: host: localhost:46399 #containers=0 available=8192 used=0 with event: FINISHED
2018-01-07 02:21:41,477 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1515262747671_0001_000001
2018-01-07 02:21:41,479 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1515262747671_0001_000001 State change from FINISHING to FINISHED
2018-01-07 02:21:41,486 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1515262747671_0001 State change from FINISHING to FINISHED
2018-01-07 02:21:41,487 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1515262747671_0001_000001 is done. finalState=FINISHED
2018-01-07 02:21:41,487 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1515262747671_0001 requests cleared
2018-01-07 02:21:41,487 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1515262747671_0001 user: root queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2018-01-07 02:21:41,487 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1515262747671_0001 user: root leaf-queue of parent: root #applications: 0
2018-01-07 02:21:41,487 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1515262747671_0001_000001
2018-01-07 02:21:41,496 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	OPERATION=Application Finished - Succeeded	TARGET=RMAppManager	RESULT=SUCCESS	APPID=application_1515262747671_0001
2018-01-07 02:21:41,500 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1515262747671_0001,name=word count,user=root,queue=default,state=FINISHED,trackingUrl=http://localhost:8088/proxy/application_1515262747671_0001/jobhistory/job/job_1515262747671_0001,appMasterHost=xiancheng,startTime=1515262854530,finishTime=1515262894129,finalStatus=FAILED
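The tail of this ResourceManager log is worth a close look: the audit line reports "Application Finished - Succeeded" (meaning only that the ApplicationMaster unregistered cleanly), yet the ApplicationSummary line records finalStatus=FAILED. The RM log only tracks container life cycles, so the real cause has to be dug out of the container logs themselves. Assuming log aggregation is enabled (yarn.log-aggregation-enable=true in yarn-site.xml; the NodeManager log below does mention the default remote dir /tmp/logs), a command along these lines should pull the aggregated container logs once the job ends:

yarn logs -applicationId application_1515262747671_0001

If aggregation is off, the same logs sit on each NodeManager under its local log directory (the path configured by yarn.nodemanager.log-dirs), in application_.../container_.../ subdirectories.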
Next, I checked the NodeManager log, but it contained only a single error message, essentially the same as the one printed to the console:
2018-01-07 02:15:00,740 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Public cache exiting
2018-01-07 02:19:11,900 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NodeManager
STARTUP_MSG:   host = xiancheng/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.5.0
STARTUP_MSG:   classpath = /opt/hadoop-2.5.0/etc/hadoop:... (several thousand characters of jar paths omitted)
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1616291; compiled by 'jenkins' on 2014-08-06T17:31Z
STARTUP_MSG:   java = 1.7.0_80
************************************************************/
2018-01-07 02:19:11,943 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: registered UNIX signal handlers for [TERM, HUP, INT]
2018-01-07 02:19:15,013 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher
2018-01-07 02:19:15,015 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher
2018-01-07 02:19:15,016 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.LocalizationEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService
2018-01-07 02:19:15,017 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices
2018-01-07 02:19:15,017 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
2018-01-07 02:19:15,018 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher
2018-01-07 02:19:15,061 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.ContainerManagerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl
2018-01-07 02:19:15,062 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.NodeManagerEventType for class org.apache.hadoop.yarn.server.nodemanager.NodeManager
2018-01-07 02:19:15,225 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2018-01-07 02:19:18,433 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2018-01-07 02:19:18,442 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system started
2018-01-07 02:19:19,165 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.event.LogHandlerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService
2018-01-07 02:19:19,165 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: per directory file limit = 8192
2018-01-07 02:19:19,820 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: usercache path : file:/opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache_DEL_1515262759181
2018-01-07 02:19:19,934 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting path : file:/opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache_DEL_1515262759181/root
2018-01-07 02:19:20,190 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.LocalizerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker
2018-01-07 02:19:23,682 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: The Auxilurary Service named 'mapreduce_shuffle' in the configuration is for class class org.apache.hadoop.mapred.ShuffleHandler which has a name of 'httpshuffle'. Because these are not the same tools trying to send ServiceData and read Service Meta Data may have issues unless the refer to the name in the config.
2018-01-07 02:19:23,705 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Adding auxiliary service httpshuffle, "mapreduce_shuffle"
2018-01-07 02:19:25,789 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:  Using ResourceCalculatorPlugin : org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin@939bc30
2018-01-07 02:19:25,839 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:  Using ResourceCalculatorProcessTree : null
2018-01-07 02:19:25,839 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Physical memory check enabled: true
2018-01-07 02:19:25,839 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Virtual memory check enabled: true
2018-01-07 02:19:26,363 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: NodeManager configured with 8 G physical memory allocated to containers, which is more than 80% of the total physical memory available (992.7 M). Thrashing might happen.
2018-01-07 02:19:26,401 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Initialized nodemanager for null: physical-memory=8192 virtual-memory=17204 virtual-cores=8
2018-01-07 02:19:37,795 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2018-01-07 02:19:47,621 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 46399
2018-01-07 02:19:52,061 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ContainerManagementProtocolPB to the server
2018-01-07 02:19:52,067 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Blocking new container-requests as container manager rpc server is still starting.
2018-01-07 02:19:52,580 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-01-07 02:19:55,202 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 46399: starting
2018-01-07 02:19:57,099 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecretManager: Updating node address : localhost:46399
2018-01-07 02:19:57,106 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: ContainerManager started at xiancheng/127.0.0.1:46399
2018-01-07 02:19:57,138 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2018-01-07 02:19:57,139 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8040
2018-01-07 02:19:57,142 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.nodemanager.api.LocalizationProtocolPB to the server
2018-01-07 02:19:57,144 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-01-07 02:19:57,144 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8040: starting
2018-01-07 02:19:57,145 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Localizer started on port 8040
2018-01-07 02:20:00,883 INFO org.apache.hadoop.mapred.IndexCache: IndexCache created with max memory = 10485760
2018-01-07 02:20:04,552 INFO org.apache.hadoop.mapred.ShuffleHandler: httpshuffle listening on port 13562
2018-01-07 02:20:04,608 INFO org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer: Instantiating NMWebApp at 0.0.0.0:8042
2018-01-07 02:20:04,877 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2018-01-07 02:20:04,881 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.nodemanager is not defined
2018-01-07 02:20:04,924 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2018-01-07 02:20:04,929 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context node
2018-01-07 02:20:04,929 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2018-01-07 02:20:04,929 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2018-01-07 02:20:04,967 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /node/*
2018-01-07 02:20:04,967 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2018-01-07 02:20:05,127 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 8042
2018-01-07 02:20:05,128 INFO org.mortbay.log: jetty-6.1.26
2018-01-07 02:20:05,516 INFO org.mortbay.log: Extract jar:file:/opt/hadoop-2.5.0/share/hadoop/yarn/hadoop-yarn-common-2.5.0.jar!/webapps/node to /tmp/Jetty_0_0_0_0_8042_node____19tj0x/webapp
2018-01-07 02:20:19,350 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8042
2018-01-07 02:20:19,399 INFO org.apache.hadoop.yarn.webapp.WebApps: Web app /node started at 8042
2018-01-07 02:20:30,264 INFO org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2018-01-07 02:20:30,325 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at xiancheng2/0.0.0.0:8031
2018-01-07 02:20:30,430 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out 0 NM container statuses: []
2018-01-07 02:20:30,440 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registering with RM using containers :[]
2018-01-07 02:20:30,918 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecretManager: Rolling master-key for container-tokens, got key with id 241818933
2018-01-07 02:20:30,930 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMTokenSecretManagerInNM: Rolling master-key for nm-tokens, got key with id :-1516638690
2018-01-07 02:20:30,931 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registered with ResourceManager as localhost:46399 with total resource of <memory:8192, vCores:8>
2018-01-07 02:20:30,931 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Notifying ContainerManager to unblock new container-requests
2018-01-07 02:20:55,295 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1515262747671_0001_000001 (auth:SIMPLE)
2018-01-07 02:20:55,467 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1515262747671_0001_01_000001 by user root
2018-01-07 02:20:55,497 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1515262747671_0001
2018-01-07 02:20:55,504 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root	IP=127.0.0.1	OPERATION=Start Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000001
2018-01-07 02:20:55,513 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1515262747671_0001 transitioned from NEW to INITING
2018-01-07 02:20:55,514 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1515262747671_0001_01_000001 to application application_1515262747671_0001
2018-01-07 02:20:56,591 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService: Remote Root Log Dir [/tmp/logs] does not exist. Attempting to create it.
2018-01-07 02:20:56,746 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1515262747671_0001 transitioned from INITING to RUNNING
2018-01-07 02:20:56,750 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000001 transitioned from NEW to LOCALIZING
2018-01-07 02:20:56,750 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1515262747671_0001
2018-01-07 02:20:56,763 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8020/tmp/hadoop-yarn/staging/root/.staging/job_1515262747671_0001/job.jar transitioned from INIT to DOWNLOADING
2018-01-07 02:20:56,763 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8020/tmp/hadoop-yarn/staging/root/.staging/job_1515262747671_0001/job.splitmetainfo transitioned from INIT to DOWNLOADING
2018-01-07 02:20:56,764 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8020/tmp/hadoop-yarn/staging/root/.staging/job_1515262747671_0001/job.split transitioned from INIT to DOWNLOADING
2018-01-07 02:20:56,764 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8020/tmp/hadoop-yarn/staging/root/.staging/job_1515262747671_0001/job.xml transitioned from INIT to DOWNLOADING
2018-01-07 02:20:56,764 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1515262747671_0001_01_000001
2018-01-07 02:20:56,923 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /opt/hadoop-2.5.0/data/tmp/nm-local-dir/nmPrivate/container_1515262747671_0001_01_000001.tokens. Credentials list: 
2018-01-07 02:20:56,947 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user root
2018-01-07 02:20:57,011 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /opt/hadoop-2.5.0/data/tmp/nm-local-dir/nmPrivate/container_1515262747671_0001_01_000001.tokens to /opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001/container_1515262747671_0001_01_000001.tokens
2018-01-07 02:20:57,011 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: CWD set to /opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001 = file:/opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001
2018-01-07 02:20:57,640 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8020/tmp/hadoop-yarn/staging/root/.staging/job_1515262747671_0001/job.jar(->/opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001/filecache/10/job.jar) transitioned from DOWNLOADING to LOCALIZED
2018-01-07 02:20:57,671 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8020/tmp/hadoop-yarn/staging/root/.staging/job_1515262747671_0001/job.splitmetainfo(->/opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001/filecache/11/job.splitmetainfo) transitioned from DOWNLOADING to LOCALIZED
2018-01-07 02:20:57,723 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8020/tmp/hadoop-yarn/staging/root/.staging/job_1515262747671_0001/job.split(->/opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001/filecache/12/job.split) transitioned from DOWNLOADING to LOCALIZED
2018-01-07 02:20:57,741 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8020/tmp/hadoop-yarn/staging/root/.staging/job_1515262747671_0001/job.xml(->/opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001/filecache/13/job.xml) transitioned from DOWNLOADING to LOCALIZED
2018-01-07 02:20:57,742 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000001 transitioned from LOCALIZING to LOCALIZED
2018-01-07 02:20:57,785 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000001 transitioned from LOCALIZED to RUNNING
2018-01-07 02:20:57,787 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1515262747671_0001_01_000001
2018-01-07 02:20:57,790 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [nice, -n, 0, bash, /opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001/container_1515262747671_0001_01_000001/default_container_executor.sh]
2018-01-07 02:21:00,893 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3327 for container-id container_1515262747671_0001_01_000001: 83.5 MB of 2 GB physical memory used; 1.6 GB of 4.2 GB virtual memory used
2018-01-07 02:21:03,962 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3327 for container-id container_1515262747671_0001_01_000001: 102.6 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2018-01-07 02:21:06,759 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1515262747671_0001_000001 (auth:SIMPLE)
2018-01-07 02:21:06,763 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1515262747671_0001_01_000002 by user root
2018-01-07 02:21:06,763 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root	IP=127.0.0.1	OPERATION=Start Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000002
2018-01-07 02:21:06,764 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1515262747671_0001_01_000002 to application application_1515262747671_0001
2018-01-07 02:21:06,765 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000002 transitioned from NEW to LOCALIZING
2018-01-07 02:21:06,765 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1515262747671_0001
2018-01-07 02:21:06,765 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_INIT for appId application_1515262747671_0001
2018-01-07 02:21:06,765 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got APPLICATION_INIT for service mapreduce_shuffle
2018-01-07 02:21:06,768 INFO org.apache.hadoop.mapred.ShuffleHandler: Added token for job_1515262747671_0001
2018-01-07 02:21:06,768 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000002 transitioned from LOCALIZING to LOCALIZED
2018-01-07 02:21:06,820 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [nice, -n, 0, bash, /opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001/container_1515262747671_0001_01_000002/default_container_executor.sh]
2018-01-07 02:21:06,830 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000002 transitioned from LOCALIZED to RUNNING
2018-01-07 02:21:06,969 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1515262747671_0001_01_000002
2018-01-07 02:21:07,015 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3327 for container-id container_1515262747671_0001_01_000001: 125.1 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2018-01-07 02:21:07,075 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3467 for container-id container_1515262747671_0001_01_000002: 30.9 MB of 1 GB physical memory used; 789.6 MB of 2.1 GB virtual memory used
2018-01-07 02:21:10,094 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3327 for container-id container_1515262747671_0001_01_000001: 119.9 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2018-01-07 02:21:10,121 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3467 for container-id container_1515262747671_0001_01_000002: 84.0 MB of 1 GB physical memory used; 813.6 MB of 2.1 GB virtual memory used
2018-01-07 02:21:11,016 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1515262747671_0001_01_000002 is : 1
2018-01-07 02:21:11,025 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1515262747671_0001_01_000002 and exit code: 1
ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
2018-01-07 02:21:11,032 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 
2018-01-07 02:21:11,033 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 1
2018-01-07 02:21:11,034 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000002 transitioned from RUNNING to EXITED_WITH_FAILURE
2018-01-07 02:21:11,034 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1515262747671_0001_01_000002
2018-01-07 02:21:11,074 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001/container_1515262747671_0001_01_000002
2018-01-07 02:21:11,069 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root	OPERATION=Container Finished - Failed	TARGET=ContainerImpl	RESULT=FAILURE	DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000002
2018-01-07 02:21:11,080 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000002 transitioned from EXITED_WITH_FAILURE to DONE
2018-01-07 02:21:11,080 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1515262747671_0001_01_000002 from application application_1515262747671_0001
2018-01-07 02:21:11,080 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Considering container container_1515262747671_0001_01_000002 for log-aggregation
2018-01-07 02:21:11,080 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1515262747671_0001
2018-01-07 02:21:11,322 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1515262747671_0001_01_000002]
2018-01-07 02:21:11,657 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1515262747671_0001_01_000002
2018-01-07 02:21:13,122 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1515262747671_0001_01_000002
2018-01-07 02:21:13,134 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3327 for container-id container_1515262747671_0001_01_000001: 120.1 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2018-01-07 02:21:13,663 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1515262747671_0001_01_000003 by user root
2018-01-07 02:21:13,663 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root	IP=127.0.0.1	OPERATION=Start Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000003
2018-01-07 02:21:13,663 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1515262747671_0001_01_000003 to application application_1515262747671_0001
2018-01-07 02:21:13,664 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000003 transitioned from NEW to LOCALIZING
2018-01-07 02:21:13,664 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1515262747671_0001
2018-01-07 02:21:13,664 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_INIT for appId application_1515262747671_0001
2018-01-07 02:21:13,664 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got APPLICATION_INIT for service mapreduce_shuffle
2018-01-07 02:21:13,664 INFO org.apache.hadoop.mapred.ShuffleHandler: Added token for job_1515262747671_0001
2018-01-07 02:21:13,665 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000003 transitioned from LOCALIZING to LOCALIZED
2018-01-07 02:21:13,706 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [nice, -n, 0, bash, /opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001/container_1515262747671_0001_01_000003/default_container_executor.sh]
2018-01-07 02:21:13,707 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000003 transitioned from LOCALIZED to RUNNING
2018-01-07 02:21:16,141 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1515262747671_0001_01_000003
2018-01-07 02:21:16,150 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3327 for container-id container_1515262747671_0001_01_000001: 120.1 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2018-01-07 02:21:16,185 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3530 for container-id container_1515262747671_0001_01_000003: 71.7 MB of 1 GB physical memory used; 807.1 MB of 2.1 GB virtual memory used
2018-01-07 02:21:18,323 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1515262747671_0001_01_000003 is : 1
2018-01-07 02:21:18,323 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1515262747671_0001_01_000003 and exit code: 1
ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
2018-01-07 02:21:18,323 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 
2018-01-07 02:21:18,323 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 1
2018-01-07 02:21:18,324 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000003 transitioned from RUNNING to EXITED_WITH_FAILURE
2018-01-07 02:21:18,324 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1515262747671_0001_01_000003
2018-01-07 02:21:18,341 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root	OPERATION=Container Finished - Failed	TARGET=ContainerImpl	RESULT=FAILURE	DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000003
2018-01-07 02:21:18,341 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000003 transitioned from EXITED_WITH_FAILURE to DONE
2018-01-07 02:21:18,341 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1515262747671_0001_01_000003 from application application_1515262747671_0001
2018-01-07 02:21:18,341 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Considering container container_1515262747671_0001_01_000003 for log-aggregation
2018-01-07 02:21:18,341 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1515262747671_0001
2018-01-07 02:21:18,341 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001/container_1515262747671_0001_01_000003
2018-01-07 02:21:18,359 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1515262747671_0001_01_000003]
2018-01-07 02:21:18,674 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1515262747671_0001_01_000003
2018-01-07 02:21:19,185 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1515262747671_0001_01_000003
2018-01-07 02:21:19,196 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3327 for container-id container_1515262747671_0001_01_000001: 120.1 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2018-01-07 02:21:20,716 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1515262747671_0001_01_000004 by user root
2018-01-07 02:21:20,717 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root	IP=127.0.0.1	OPERATION=Start Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000004
2018-01-07 02:21:20,717 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1515262747671_0001_01_000004 to application application_1515262747671_0001
2018-01-07 02:21:20,717 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000004 transitioned from NEW to LOCALIZING
2018-01-07 02:21:20,718 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1515262747671_0001
2018-01-07 02:21:20,718 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_INIT for appId application_1515262747671_0001
2018-01-07 02:21:20,718 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got APPLICATION_INIT for service mapreduce_shuffle
2018-01-07 02:21:20,718 INFO org.apache.hadoop.mapred.ShuffleHandler: Added token for job_1515262747671_0001
2018-01-07 02:21:20,718 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000004 transitioned from LOCALIZING to LOCALIZED
2018-01-07 02:21:20,788 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000004 transitioned from LOCALIZED to RUNNING
2018-01-07 02:21:20,799 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [nice, -n, 0, bash, /opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001/container_1515262747671_0001_01_000004/default_container_executor.sh]
2018-01-07 02:21:22,197 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1515262747671_0001_01_000004
2018-01-07 02:21:22,207 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3327 for container-id container_1515262747671_0001_01_000001: 120.4 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2018-01-07 02:21:22,247 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3586 for container-id container_1515262747671_0001_01_000004: 53.2 MB of 1 GB physical memory used; 801.8 MB of 2.1 GB virtual memory used
2018-01-07 02:21:24,944 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1515262747671_0001_01_000004 is : 1
2018-01-07 02:21:24,944 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1515262747671_0001_01_000004 and exit code: 1
ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
2018-01-07 02:21:24,944 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 
2018-01-07 02:21:24,945 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 1
2018-01-07 02:21:24,945 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000004 transitioned from RUNNING to EXITED_WITH_FAILURE
2018-01-07 02:21:24,945 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1515262747671_0001_01_000004
2018-01-07 02:21:24,979 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root	OPERATION=Container Finished - Failed	TARGET=ContainerImpl	RESULT=FAILURE	DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000004
2018-01-07 02:21:24,979 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000004 transitioned from EXITED_WITH_FAILURE to DONE
2018-01-07 02:21:24,979 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1515262747671_0001_01_000004 from application application_1515262747671_0001
2018-01-07 02:21:24,979 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Considering container container_1515262747671_0001_01_000004 for log-aggregation
2018-01-07 02:21:24,979 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1515262747671_0001
2018-01-07 02:21:24,979 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001/container_1515262747671_0001_01_000004
2018-01-07 02:21:25,248 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1515262747671_0001_01_000004
2018-01-07 02:21:25,259 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3327 for container-id container_1515262747671_0001_01_000001: 120.4 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2018-01-07 02:21:25,401 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1515262747671_0001_01_000004]
2018-01-07 02:21:25,737 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1515262747671_0001_01_000004
2018-01-07 02:21:28,269 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3327 for container-id container_1515262747671_0001_01_000001: 120.4 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2018-01-07 02:21:28,749 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1515262747671_0001_01_000005 by user root
2018-01-07 02:21:28,750 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root	IP=127.0.0.1	OPERATION=Start Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000005
2018-01-07 02:21:28,751 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1515262747671_0001_01_000005 to application application_1515262747671_0001
2018-01-07 02:21:28,751 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000005 transitioned from NEW to LOCALIZING
2018-01-07 02:21:28,751 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1515262747671_0001
2018-01-07 02:21:28,751 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_INIT for appId application_1515262747671_0001
2018-01-07 02:21:28,751 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got APPLICATION_INIT for service mapreduce_shuffle
2018-01-07 02:21:28,752 INFO org.apache.hadoop.mapred.ShuffleHandler: Added token for job_1515262747671_0001
2018-01-07 02:21:28,752 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000005 transitioned from LOCALIZING to LOCALIZED
2018-01-07 02:21:28,799 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000005 transitioned from LOCALIZED to RUNNING
2018-01-07 02:21:28,804 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [nice, -n, 0, bash, /opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001/container_1515262747671_0001_01_000005/default_container_executor.sh]
2018-01-07 02:21:31,272 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1515262747671_0001_01_000005
2018-01-07 02:21:31,281 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3327 for container-id container_1515262747671_0001_01_000001: 120.4 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2018-01-07 02:21:31,320 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3646 for container-id container_1515262747671_0001_01_000005: 78.4 MB of 1 GB physical memory used; 807.2 MB of 2.1 GB virtual memory used
2018-01-07 02:21:33,335 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1515262747671_0001_01_000005 is : 1
2018-01-07 02:21:33,335 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1515262747671_0001_01_000005 and exit code: 1
ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
2018-01-07 02:21:33,335 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 
2018-01-07 02:21:33,335 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 1
2018-01-07 02:21:33,335 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000005 transitioned from RUNNING to EXITED_WITH_FAILURE
2018-01-07 02:21:33,335 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1515262747671_0001_01_000005
2018-01-07 02:21:33,351 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root	OPERATION=Container Finished - Failed	TARGET=ContainerImpl	RESULT=FAILURE	DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000005
2018-01-07 02:21:33,351 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000005 transitioned from EXITED_WITH_FAILURE to DONE
2018-01-07 02:21:33,351 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1515262747671_0001_01_000005 from application application_1515262747671_0001
2018-01-07 02:21:33,351 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Considering container container_1515262747671_0001_01_000005 for log-aggregation
2018-01-07 02:21:33,351 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1515262747671_0001
2018-01-07 02:21:33,352 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001/container_1515262747671_0001_01_000005
2018-01-07 02:21:33,458 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1515262747671_0001_01_000005]
2018-01-07 02:21:33,799 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1515262747671_0001_01_000005
2018-01-07 02:21:34,320 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1515262747671_0001_01_000005
2018-01-07 02:21:34,332 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3327 for container-id container_1515262747671_0001_01_000001: 121.3 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2018-01-07 02:21:37,341 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3327 for container-id container_1515262747671_0001_01_000001: 122.1 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2018-01-07 02:21:40,351 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 3327 for container-id container_1515262747671_0001_01_000001: 122.0 MB of 2 GB physical memory used; 1.6 GB of 4.2 GB virtual memory used
2018-01-07 02:21:40,516 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container container_1515262747671_0001_01_000001 succeeded 
2018-01-07 02:21:40,516 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000001 transitioned from RUNNING to EXITED_WITH_SUCCESS
2018-01-07 02:21:40,516 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1515262747671_0001_01_000001
2018-01-07 02:21:40,533 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root	OPERATION=Container Finished - Succeeded	TARGET=ContainerImpl	RESULT=SUCCESS	APPID=application_1515262747671_0001	CONTAINERID=container_1515262747671_0001_01_000001
2018-01-07 02:21:40,533 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1515262747671_0001_01_000001 transitioned from EXITED_WITH_SUCCESS to DONE
2018-01-07 02:21:40,533 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1515262747671_0001_01_000001 from application application_1515262747671_0001
2018-01-07 02:21:40,533 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Considering container container_1515262747671_0001_01_000001 for log-aggregation
2018-01-07 02:21:40,533 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1515262747671_0001
2018-01-07 02:21:40,534 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001/container_1515262747671_0001_01_000001
2018-01-07 02:21:41,478 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1515262747671_0001_01_000001]
2018-01-07 02:21:41,495 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1515262747671_0001_000001 (auth:SIMPLE)
2018-01-07 02:21:41,500 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1515262747671_0001_01_000001
2018-01-07 02:21:42,489 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1515262747671_0001 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2018-01-07 02:21:42,489 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1515262747671_0001
2018-01-07 02:21:42,490 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1515262747671_0001 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2018-01-07 02:21:42,490 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Application just finished : application_1515262747671_0001
2018-01-07 02:21:42,490 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /opt/hadoop-2.5.0/data/tmp/nm-local-dir/usercache/root/appcache/application_1515262747671_0001
2018-01-07 02:21:42,491 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Starting aggregate log-file for app application_1515262747671_0001 at /tmp/logs/root/logs/application_1515262747671_0001/localhost_46399.tmp
2018-01-07 02:21:42,592 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Uploading logs for container container_1515262747671_0001_01_000002. Current good log dirs are /opt/hadoop-2.5.0/logs/userlogs
2018-01-07 02:21:42,621 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Uploading logs for container container_1515262747671_0001_01_000003. Current good log dirs are /opt/hadoop-2.5.0/logs/userlogs
2018-01-07 02:21:42,621 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Uploading logs for container container_1515262747671_0001_01_000004. Current good log dirs are /opt/hadoop-2.5.0/logs/userlogs
2018-01-07 02:21:42,621 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Uploading logs for container container_1515262747671_0001_01_000005. Current good log dirs are /opt/hadoop-2.5.0/logs/userlogs
2018-01-07 02:21:42,622 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Uploading logs for container container_1515262747671_0001_01_000001. Current good log dirs are /opt/hadoop-2.5.0/logs/userlogs
2018-01-07 02:21:42,634 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting path : /opt/hadoop-2.5.0/logs/userlogs/application_1515262747671_0001
2018-01-07 02:21:42,857 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Finished aggregate log-file for app application_1515262747671_0001
2018-01-07 02:21:43,351 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1515262747671_0001_01_000001
Reading up to this point, I searched both Google and Baidu for the error in the nodemanager log but found no solution. Since I had enabled log aggregation, I later noticed the following on the historyserver page:
(screenshot of the historyserver job page; image not preserved)
As the page shows, the job already failed during the map tasks and never reached the reduce phase, so I went to look at the map task logs:
(screenshot of the map task logs; image not preserved)
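As an aside, since log aggregation was enabled (the yarn.log-aggregation-enable property in yarn-site.xml), the same aggregated container logs can also be pulled from the shell instead of the web UI; a quick sketch using the application ID from the NodeManager log above:

yarn logs -applicationId application_1515262747671_0001   # prints the aggregated stdout/stderr/syslog of every container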
At this point I finally found the root of the problem, which once again proves how important the logs are: the machine had run out of memory, so the task was killed:
(screenshot of the out-of-memory error; image not preserved)
Since my server is pretty weak and short on RAM, the fix was easy to figure out.
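Before changing anything, you can confirm the shortage from the shell (a generic check, my addition rather than part of the original post):

free -m       # the Mem: row shows physical RAM; the Swap: row is all zeros when no swap exists
swapon -s     # lists active swap devices or files; empty output means no swap is configured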
Solutions: (1) Nothing money can't fix: buy a better machine, hahaha.
(2) Add a swap partition. It is slower than real physical memory, but with no budget that's the way it goes.
(3) The method I am using now. Because of how my machine was set up, it has no swap partition, and I was a bit lazy, so I did the following (see the sketch below):

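The screenshot with the exact commands is gone, so here is a minimal sketch of the usual file-backed swap trick. The path /tmp/swapfile and the 2 GB size are my assumptions; a file under /tmp is not kept across reboots, which matches the behavior described below:

dd if=/dev/zero of=/tmp/swapfile bs=1M count=2048   # create a 2 GB file of zeros (size is an assumption)
chmod 600 /tmp/swapfile                             # restrict permissions; swapon warns about world-readable swap files
mkswap /tmp/swapfile                                # format the file as swap space
swapon /tmp/swapfile                                # enable it immediately, no reboot needed
free -m                                             # the Swap: row should now show the extra space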
At this point, the memory problem was solved. The only catch is that this workaround stops working after a reboot, because the file being used as memory gets deleted.
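If you ever do want the swap to survive reboots, the standard fix (my addition, not what the original post did) is to keep the file on a persistent path such as /swapfile, run the same mkswap/swapon steps on it, and register it in /etc/fstab so it is re-enabled at boot:

/swapfile swap swap defaults 0 0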
