yarn: similar logs when starting up a container

This post walks through the environment YARN sets up for a Spark executor and the command used to launch it, covering the environment variables, the JVM command-line arguments, and how the container is started, as background for understanding the Spark-on-YARN integration.


15/12/09 16:47:52 INFO yarn.ExecutorRunnable: Setting up executor with environment: Map(
  CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*,
  SPARK_LOG_URL_STDERR -> http://gzsw-11:8042/node/containerlogs/container_1441038159113_0018_01_000003/hadoop/stderr?start=0,
  SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1441038159113_0018,
  SPARK_YARN_CACHE_FILES_FILE_SIZES -> 157585209,
  SPARK_USER -> hadoop,
  SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,
  SPARK_YARN_MODE -> true,
  SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1449650858015,
  SPARK_LOG_URL_STDOUT -> http://gzsw-11:8042/node/containerlogs/container_1441038159113_0018_01_000003/hadoop/stdout?start=0,
  SPARK_YARN_CACHE_FILES -> hdfs://mycluster/user/hadoop/.sparkStaging/application_1441038159113_0018/spark-assembly-1.4.1-hadoop2.4.0.jar#__spark__.jar)
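
The {{PWD}} and <CPS> tokens in that map are not literal values: they are YARN placeholders ({{...}} is the cross-platform form of an environment variable, <CPS> the classpath separator) which the NodeManager expands on the target host when it materialises the container. A minimal Scala sketch of how such an environment map gets attached to a ContainerLaunchContext (an illustration using Hadoop's public YARN API, not Spark's actual ExecutorRunnable code, with the entries trimmed down):

// Minimal sketch, assuming the Hadoop YARN client API is on the classpath;
// the map entries are abbreviated relative to the log above.
import org.apache.hadoop.yarn.api.ApplicationConstants
import org.apache.hadoop.yarn.api.ApplicationConstants.Environment
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext
import org.apache.hadoop.yarn.util.Records
import scala.collection.JavaConverters._

val cps = ApplicationConstants.CLASS_PATH_SEPARATOR   // serialised as "<CPS>"
val pwd = Environment.PWD.$$()                         // serialised as "{{PWD}}"

// The classpath is built from placeholders so the NodeManager can expand it
// with the container's working directory and the node's own Hadoop layout.
val classpath = Seq(pwd, s"$pwd/__spark__.jar", "$HADOOP_CONF_DIR",
  "$HADOOP_COMMON_HOME/share/hadoop/common/*").mkString(cps)

val env = Map(
  "CLASSPATH"       -> classpath,
  "SPARK_USER"      -> "hadoop",
  "SPARK_YARN_MODE" -> "true")

val ctx = Records.newRecord(classOf[ContainerLaunchContext])
ctx.setEnvironment(env.asJava)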

15/12/09 16:47:52 INFO yarn.ExecutorRunnable: Setting up executor with commands: List(
  {{JAVA_HOME}}/bin/java, -server, -XX:OnOutOfMemoryError='kill %p', -Xms2048m, -Xmx2048m,
  '-Xloggc:~/spark-executor.gc', '-XX:+UseCMSCompactAtFullCollection', '-XX:CMSFullGCsBeforeCompaction=2',
  '-XX:CMSInitiatingOccupancyFraction=65', '-XX:+UseCMSInitiatingOccupancyOnly', '-XX:PermSize=64m',
  '-XX:MaxPermSize=256m', '-XX:NewRatio=5', '-XX:+UseParNewGC', '-XX:+UseConcMarkSweepGC',
  '-XX:+PrintGCDateStamps', '-XX:+PrintGCDetails', '-XX:ParallelGCThreads=5',
  -Djava.io.tmpdir={{PWD}}/tmp, '-Dspark.master.ui.port=7102', '-Dspark.worker.ui.port=7105',
  '-Dspark.ui.port=7106', '-Dspark.driver.port=56894', -Dspark.yarn.app.container.log.dir=<LOG_DIR>,
  org.apache.spark.executor.CoarseGrainedExecutorBackend,
  --driver-url, akka.tcp://sparkDriver@192.168.100.4:56894/user/CoarseGrainedScheduler,
  --executor-id, 2, --hostname, gzsw-11, --cores, 2,
  --app-id, application_1441038159113_0018, --user-class-path, file:$PWD/__app__.jar,
  1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
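
The command is likewise handed to YARN as a plain list of strings on the same ContainerLaunchContext; {{JAVA_HOME}} and <LOG_DIR> stay as placeholders until the NodeManager writes and runs launch_container.sh on the chosen node. A trimmed-down sketch (again an illustration, not the real ExecutorRunnable) of building that list:

// Minimal sketch of handing a launch command to YARN; the arguments are
// abbreviated relative to the log above.
import org.apache.hadoop.yarn.api.ApplicationConstants
import org.apache.hadoop.yarn.api.ApplicationConstants.Environment
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext
import org.apache.hadoop.yarn.util.Records
import scala.collection.JavaConverters._

val logDir  = ApplicationConstants.LOG_DIR_EXPANSION_VAR   // serialised as "<LOG_DIR>"
val javaExe = Environment.JAVA_HOME.$$() + "/bin/java"     // "{{JAVA_HOME}}/bin/java"

val commands = Seq(
  javaExe, "-server", "-Xms2048m", "-Xmx2048m",
  "org.apache.spark.executor.CoarseGrainedExecutorBackend",
  "--driver-url", "akka.tcp://sparkDriver@192.168.100.4:56894/user/CoarseGrainedScheduler",
  "--executor-id", "2", "--hostname", "gzsw-11", "--cores", "2",
  "--app-id", "application_1441038159113_0018",
  "1>", s"$logDir/stdout", "2>", s"$logDir/stderr")

val ctx = Records.newRecord(classOf[ContainerLaunchContext])
ctx.setCommands(commands.asJava)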

Checking this once more: that is the launch command for an executor, and the container starts up as the executor itself (org.apache.spark.executor.CoarseGrainedExecutorBackend), i.e. the executor runs in the same JVM as the container rather than the container spawning a new JVM process for it.
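
One way to see this from inside a running job (a hypothetical check, not part of the logs above) is to ask the executor's own JVM for its process name alongside YARN's CONTAINER_ID environment variable: both belong to the same process, because the launch script runs the java command shown above directly rather than wrapping it in another JVM.

// Hypothetical check, e.g. run inside rdd.mapPartitions on an executor: the
// JVM that answers is the very process the NodeManager launched for the container.
import java.lang.management.ManagementFactory

val containerId = sys.env.getOrElse("CONTAINER_ID", "<not running under YARN>")
val jvmName     = ManagementFactory.getRuntimeMXBean.getName   // typically "pid@hostname"
println(s"containerId=$containerId, executor JVM=$jvmName")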
